WO2024055816A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2024055816A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
color
image processing
brightness
Prior art date
Application number
PCT/CN2023/114005
Other languages
English (en)
French (fr)
Inventor
乔晓磊
肖斌
李怀乾
赵志忠
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2024055816A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/77 Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology

Definitions

  • the present application relates to the field of images and, specifically, to an image processing method and an electronic device.
  • the present application provides an image processing method and electronic device, which can remove image banding problems and/or local color cast problems in images and improve image quality.
  • an image processing method is provided, which is applied to an electronic device.
  • the light source of the shooting environment in which the electronic device is located is a stroboscopic light source.
  • the image processing method includes:
  • the first image is an image of a photographed object collected based on a first exposure time, the photographic object is a moving object, the first exposure time is less than a first duration, and the first image includes stripes and/or color cast image areas;
  • a first operation is detected, and the first operation instructs the electronic device to take pictures or videos;
  • a color conversion matrix and/or a brightness parameter are obtained.
  • the color conversion matrix is used to perform color adjustment on the first image
  • the brightness parameter is used to adjust the brightness of the first image
  • the light source of the shooting environment where the electronic device is located is a stroboscopic light source; because the electronic device needs to take pictures of moving objects, the exposure time is reduced, and banding stripes appear in the image.
  • in the embodiment of the present application, color migration and brightness migration are performed on the first image (the short-exposure image) based on the second image (for example, a normal-exposure image) to obtain a color conversion matrix and/or brightness parameters; based on the color conversion matrix and/or brightness parameters, the stripes and/or color cast image areas in the first image are removed; this ensures that when images of a moving subject are collected at the moment of movement, the stripes and/or color cast image areas in the image are removed, improving image quality.
  • some implementations of the first aspect also include:
  • the bilateral grid data includes the color conversion matrix and/or the brightness parameter, and the size of the first image is the same as the size of the second image.
  • the image processing model can be used to obtain a bilateral grid that transfers the color and brightness of an image frame obtained with a normal or long exposure time to an image frame with a short exposure time
  • under a stroboscopic light source driven by 50 Hz alternating current, the normal or long exposure time is an integer multiple of 10 ms, so there are usually no image stripes or color cast image areas in the long-exposure and normal-exposure images
  • the color conversion matrix and/or brightness parameters are obtained based on the bilateral grid output by the image processing model; when performing color migration and brightness migration on the first image through the image processing model, differences in image content between the first image and the second image can be identified; therefore, with the color conversion matrix and/or brightness parameters obtained through the image processing model, ghost areas will not be introduced when performing color adjustment and/or brightness adjustment on the first image, thereby improving image quality.
  • performing first image processing on the first image based on the color conversion matrix and/or brightness parameters to obtain a third image includes:
  • the first image is interpolated based on the color conversion matrix and/or the brightness parameter to obtain the third image.
  • for example, the matrix of the first image can be multiplied by the color conversion matrix and/or the brightness parameter to obtain the third image.
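As a concrete illustration of this multiplication, here is a minimal Python sketch. The function name, the use of a single global 3x3 matrix, and the scalar brightness gain are assumptions for clarity; the patent applies such parameters per bilateral-grid cell rather than globally.

```python
import numpy as np

def apply_transfer(image, color_matrix, brightness_gain):
    """Apply a 3x3 color conversion matrix and a scalar brightness gain
    to an RGB image of shape (H, W, 3) with values in [0, 1].
    Simplified global version of the per-cell adjustment in the patent."""
    h, w, _ = image.shape
    out = image.reshape(-1, 3) @ color_matrix.T   # color migration
    out = out * brightness_gain                   # brightness migration
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

# Example: warm the image slightly and brighten it by 10%.
img = np.full((2, 2, 3), 0.5)
warm = np.array([[1.05, 0.0, 0.0],
                 [0.0,  1.0, 0.0],
                 [0.0,  0.0, 0.95]])
result = apply_transfer(img, warm, 1.1)
print(result[0, 0].round(4).tolist())  # [0.5775, 0.55, 0.5225]
```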
  • performing interpolation processing on the first image based on the color conversion matrix and/or brightness parameters to obtain the third image includes:
  • in the first color space, first image processing is performed on the first image based on the color conversion matrix and/or the brightness parameter to obtain a processed image
  • second image processing is performed on the processed image to obtain the third image, and the second image processing is a color processing algorithm in the first color space.
  • when the image signal processor processes the Raw image collected by the electronic device and executes the algorithms of the first color space, the first image processing may be performed on the first image first, based on the color conversion matrix and/or the brightness parameter, and the other algorithms of the first color space may be executed afterwards; since the first image processing yields an image with the banding stripes and/or color cast image areas removed, the first image processing in effect corrects the color and brightness of the image.
  • when the accuracy of the color and brightness is already high, executing the other algorithms of the first color space afterwards keeps the accuracy of the image's color and brightness high, and the image with the banding stripes and/or color cast image areas removed is then processed by the other algorithms of the first color space, thereby improving image quality.
  • before acquiring the second image, the method further includes: detecting the shooting scene in which the electronic device is located and detecting the moving object; and detecting that the stripes and/or color cast image areas are present in the first image.
  • the size of the first duration is obtained based on the number of times the strobe light source lights up and darkens per second.
  • the first duration = 1000 ms / the number of times the strobe light source lights up and darkens per second.
  • the number of times the stroboscopic light source turns on and off per second is associated with the frequency of the operating voltage of the stroboscopic light source.
  • for example, when the frequency of the working voltage of the stroboscopic light source is 50 Hz, the stroboscopic light source lights up and darkens 100 times per second, and the first duration is 10 ms; in this case, the exposure time of the first image is less than 10 ms, and the exposure time of the second image is an integer multiple of 10 ms.
  • when the frequency of the working voltage of the stroboscopic light source is 60 Hz, the stroboscopic light source lights up and darkens 120 times per second, and the first duration is approximately 8.3 ms; in this case, the exposure time of the first image is less than 8.3 ms, and the exposure time of the second image is an integer multiple of 8.3 ms.
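The relationship between mains frequency, flash count, and the first duration can be sketched as follows (the function name is hypothetical; the arithmetic follows the two examples above):

```python
def first_duration_ms(mains_hz):
    """The strobe light brightens and darkens twice per AC cycle, so it
    flashes 2 * mains_hz times per second; the first duration is
    1000 ms divided by that count."""
    flashes_per_second = 2 * mains_hz
    return 1000.0 / flashes_per_second

print(first_duration_ms(50))            # 10.0 (ms)
print(round(first_duration_ms(60), 1))  # 8.3  (ms)
```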
  • the image processing model is a convolutional neural network.
  • the image processing model is HDRnet.
  • the image processing model is trained by the following method:
  • the sample data includes a first sample image, a second sample image and a third sample image
  • the second sample image includes the image content of the first sample image plus stripes and/or color cast image areas
  • the third sample image and the first sample image have the same image content
  • the image quality of the third sample image is higher than the image quality of the first sample image
  • the image processing model to be trained is trained based on the difference between the predicted image and the third sample image to obtain the image processing model.
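A toy sketch of this training idea follows. The single-gain "model" and the L2 loss are stand-in assumptions: the real model (e.g. HDRNet) predicts a bilateral grid, and the patent does not specify the loss; the sketch only illustrates training on the difference between the predicted image and the clean third sample image.

```python
import numpy as np

# Toy stand-in for the image processing model: a single gain 'w'
# applied to the degraded first sample image.  The training signal is
# the same idea as in the patent: minimize the difference between the
# predicted image and the clean third sample image.
third_sample = np.full((8, 8), 0.6)   # clean reference image
first_sample = third_sample * 0.5     # darker, degraded input
w = 1.0

for _ in range(200):
    predicted = w * first_sample
    loss = np.mean((predicted - third_sample) ** 2)     # L2 difference
    grad_w = np.mean(2 * (predicted - third_sample) * first_sample)
    w -= 0.5 * grad_w                                   # gradient step

print(round(w, 3))  # 2.0, i.e. the model learned to undo the 0.5x darkening
```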
  • the electronic device includes an image signal processor, and the first image is an image output by the image signal processor.
  • the first image may be an image output by the image signal processor; since the image signal processor may perform denoising processing on the short-exposure Raw image, the first image includes banding stripes but its image details are richer.
  • the second image is an image obtained by performing third image processing on the Raw image collected by the electronic device, and the third image processing includes color space conversion processing.
  • the second image is a normal-exposure image; for example, the second image can be an image obtained by downsampling and color space conversion of a normally exposed Raw image; since in this application only the color information and brightness information in the second image are needed to migrate the first image, the requirements for detail information in the second image are lower, that is, the second image does not need to be processed by the image signal processor.
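A minimal sketch of such preparation of the normally exposed frame. The function name, the naive strided downsampling, and the BT.601 YUV coefficients are assumptions; the patent only states that downsampling and color space conversion are involved.

```python
import numpy as np

def prepare_guidance(rgb, factor=4):
    """Downsample the normally exposed RGB frame (fine detail is not
    needed, only color and brightness) and convert it to YUV using
    BT.601 coefficients (an assumption)."""
    small = rgb[::factor, ::factor, :]        # naive strided downsampling
    r, g, b = small[..., 0], small[..., 1], small[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return np.stack([y, u, v], axis=-1)

guide = prepare_guidance(np.full((64, 64, 3), 0.5))
print(guide.shape)  # (16, 16, 3)
```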
  • in a second aspect, an electronic device is provided, including one or more processors and a memory; the memory is coupled to the one or more processors and is used to store computer program code.
  • the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to:
  • the first image is an image of a photographed object collected based on a first exposure time, the photographic object is a moving object, the first exposure time is less than a first duration, and the first image includes stripes and/or color cast image areas;
  • a first operation is detected, and the first operation instructs the electronic device to take pictures or videos;
  • a color conversion matrix and/or a brightness parameter are obtained.
  • the color conversion matrix is used to perform color adjustment on the first image
  • the brightness parameter is used to adjust the brightness of the first image
  • one or more processors invoke computer instructions to cause the electronic device to execute:
  • the bilateral grid data includes the color conversion matrix and/or the brightness parameter, and the size of the first image is the same as the size of the second image.
  • one or more processors invoke computer instructions to cause the electronic device to execute:
  • the first image is interpolated based on the color conversion matrix and/or the brightness parameter to obtain the third image.
  • one or more processors invoke computer instructions to cause the electronic device to execute:
  • in the first color space, first image processing is performed on the first image based on the color conversion matrix and/or the brightness parameter to obtain a processed image
  • second image processing is performed on the processed image to obtain the third image, and the second image processing is a color processing algorithm in the first color space.
  • one or more processors call computer instructions to cause the electronic device to: detect the shooting scene where the electronic device is located and detect the moving object; and detect the presence of the stripes and/or color cast image areas in the first image.
  • the size of the first duration is obtained based on the number of times the strobe light source turns on and off per second.
  • the first duration = 1000 ms / the number of times the strobe light source turns on and off per second.
  • the number of times the stroboscopic light source turns on and off per second is associated with the frequency of the operating voltage of the stroboscopic light source.
  • the image processing model is a convolutional neural network.
  • the image processing model is trained by the following method:
  • the sample data includes a first sample image, a second sample image and a third sample image
  • the second sample image includes the image content of the first sample image plus stripes and/or color cast image areas
  • the third sample image and the first sample image have the same image content
  • the image quality of the third sample image is higher than the image quality of the first sample image
  • the image processing model to be trained is trained based on the difference between the predicted image and the third sample image to obtain the image processing model.
  • the electronic device includes an image signal processor, and the first image is an image output by the image signal processor.
  • the second image is an image obtained by performing third image processing on the Raw image collected by the electronic device, and the third image processing includes color space conversion processing.
  • an electronic device including a module/unit for executing the image processing method in the first aspect or any implementation of the first aspect.
  • a fourth aspect provides an electronic device, the electronic device comprising one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, The computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to execute the image processing method in the first aspect or any implementation of the first aspect.
  • a chip system is provided.
  • the chip system is applied to an electronic device.
  • the chip system includes one or more processors.
  • the processor is used to call computer instructions to cause the electronic device to execute the image processing method in the first aspect or any implementation of the first aspect.
  • a computer-readable storage medium is provided, which stores computer program code.
  • when the computer program code is run by an electronic device, the electronic device executes the image processing method in the first aspect or any implementation of the first aspect.
  • a computer program product is provided, including computer program code.
  • when the computer program code is run by an electronic device, the electronic device executes the image processing method in the first aspect or any implementation of the first aspect.
  • in the embodiment of the present application, the light source in the shooting environment where the electronic device is located is a stroboscopic light source; because the electronic device needs to shoot moving objects, the exposure time is reduced and banding (for example, banding stripes and/or color cast image areas) appears in the image.
  • color migration and brightness migration of the short-exposure image are performed based on the normal-exposure image.
  • the color conversion matrix and/or brightness parameters are obtained based on the bilateral grid output by the image processing model; when performing color migration and brightness migration on the first image through the image processing model, the deviation areas where the image content of the first image and the second image differs greatly can be identified, that is, the image processing model can identify the ghost areas between the second image and the first image; therefore, with the color conversion matrix and/or brightness parameters obtained through the image processing model, ghost areas will not be introduced when color adjustment and/or brightness adjustment is performed on the first image, thereby improving image quality.
  • Figure 1 is a schematic diagram of a hardware system suitable for the electronic device of the present application.
  • Figure 2 is a schematic diagram of a software system suitable for the electronic device of the present application.
  • Figure 3 is a schematic diagram of an application scenario suitable for the embodiment of the present application.
  • Figure 4 is a schematic diagram of a graphical user interface suitable for embodiments of the present application.
  • Figure 5 is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 6 is a schematic flow chart of an image signal processor processing method provided by an embodiment of the present application.
  • Figure 7 is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 8 is a schematic flow chart of a method for training an image processing model provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of a graphical user interface suitable for embodiments of the present application.
  • Figure 10 is a schematic diagram of a graphical user interface suitable for embodiments of the present application.
  • Figure 11 is a schematic diagram of a graphical user interface suitable for embodiments of the present application.
  • Figure 12 is a schematic diagram of a graphical user interface suitable for embodiments of the present application.
  • Figure 13 is a schematic diagram of a graphical user interface suitable for embodiments of the present application.
  • Figure 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 15 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the energy transmitted in the AC power grid is not stable but changes with a fixed frequency; this frequency is generally called the power frequency, and the energy change caused by the power frequency is called flicker.
  • in the shooting environment of a stroboscopic light source, the phenomenon in which the image sensor in the electronic device captures the flicker and forms strips on the image is called the image striping phenomenon, usually referred to simply as banding.
  • for example, with 50 Hz alternating current, the light source turns on and off 100 times per second, so the brightness changes with a 10 ms period.
  • when the exposure time is an integer multiple of 10 ms, the exposure integration period can offset the banding.
  • when the exposure time of the electronic device is not an integer multiple of 10 ms, the amount of light entering when collecting images fluctuates according to the AC sine wave pattern, causing regular stripes to appear in the image.
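This cancellation can be verified numerically. The sketch below models the flicker as intensity proportional to sin²(2π·50·t) (a 100 Hz flicker with a 10 ms period, a simplifying assumption) and integrates it over exposure windows starting at different times, as happens across rows of a rolling-shutter sensor:

```python
import numpy as np

def light_collected(start_ms, exposure_ms, n=10000):
    """Light integrated by one sensor row that starts exposing at
    start_ms under 50 Hz mains: intensity ~ sin^2(2*pi*50*t), i.e. a
    100 Hz flicker with a 10 ms period (simplified model)."""
    t = np.linspace(start_ms, start_ms + exposure_ms, n, endpoint=False) / 1000.0
    return np.mean(np.sin(2 * np.pi * 50 * t) ** 2) * (exposure_ms / 1000.0)

starts = np.linspace(0, 10, 11)  # rows start at different times (rolling shutter)

full = [light_collected(s, 10.0) for s in starts]  # integer multiple of 10 ms
short = [light_collected(s, 3.0) for s in starts]  # not an integer multiple

print(max(full) - min(full) < 1e-9)    # True: every row collects equal light
print(max(short) - min(short) > 1e-4)  # True: row-dependent light -> stripes
```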
  • the bilateral grid is essentially a data structure; for example, for a single-channel grayscale value, the bilateral grid can be a three-dimensional array obtained by combining the two-dimensional spatial domain information of the image and the one-dimensional grayscale information.
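The three-dimensional array described above can be sketched directly. The function name and the choice of storing per-cell means are assumptions; the point is the indexing by two spatial coordinates plus one grayscale coordinate:

```python
import numpy as np

def build_bilateral_grid(gray, spatial_bins=16, range_bins=8):
    """Build a bilateral grid from a single-channel image with values
    in [0, 1): a 3-D array indexed by (grid_y, grid_x, intensity_bin),
    where each cell holds the mean of the pixels that fall into it."""
    h, w = gray.shape
    grid_sum = np.zeros((spatial_bins, spatial_bins, range_bins))
    grid_cnt = np.zeros_like(grid_sum)
    for i in range(h):
        for j in range(w):
            gy = min(i * spatial_bins // h, spatial_bins - 1)
            gx = min(j * spatial_bins // w, spatial_bins - 1)
            gz = min(int(gray[i, j] * range_bins), range_bins - 1)
            grid_sum[gy, gx, gz] += gray[i, j]
            grid_cnt[gy, gx, gz] += 1
    return grid_sum / np.maximum(grid_cnt, 1)  # mean; empty cells stay 0

grid = build_bilateral_grid(np.random.default_rng(0).random((64, 64)))
print(grid.shape)  # (16, 16, 8)
```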
  • Exposure time refers to the time it takes for light to hit the film or photoreceptor from when the camera shutter opens to when it closes.
  • A neural network refers to a network formed by connecting multiple single neural units together; that is, the output of one neural unit can be the input of another neural unit. The input of each neural unit can be connected to the local receptive field of the previous layer to extract the features of that local receptive field; the local receptive field can be an area composed of several neural units.
  • a convolutional neural network is a deep neural network with a convolutional structure.
  • the convolutional neural network contains a feature extractor composed of a convolutional layer and a subsampling layer, which can be regarded as a filter.
  • the convolutional layer refers to the neuron layer in the convolutional neural network that convolves the input signal.
  • a neuron can be connected to only some of the neighboring layer neurons.
  • a convolutional layer usually contains several feature planes, and each feature plane can be composed of some rectangularly arranged neural units. Neural units in the same feature plane share weights, and the shared weights here are convolution kernels. Shared weights can be understood as a way to extract image information independent of position.
  • the convolution kernel can be initialized in the form of a random-sized matrix. During the training process of the convolutional neural network, the convolution kernel can obtain reasonable weights through learning. In addition, the direct benefit of sharing weights is to reduce the connections between the layers of the convolutional neural network, while reducing the risk of overfitting.
  • HDRNet predicts a typical 3-D interpolation grid; for example, the spatial domain can be divided into 16×16 cells, and the value domain can be divided into 8 intervals.
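Such a 16×16×8 grid is turned into per-pixel values by trilinear interpolation, often called "slicing". A simplified sketch (the function name is an assumption, and a real implementation would interpolate affine coefficients rather than a single scalar):

```python
import numpy as np

def slice_grid(grid, y, x, g):
    """Trilinearly sample a bilateral grid at normalized position
    (y, x) and guidance intensity g, all in [0, 1]."""
    gy, gx, gz = grid.shape
    fy, fx, fz = y * (gy - 1), x * (gx - 1), g * (gz - 1)
    y0, x0, z0 = int(fy), int(fx), int(fz)
    y1, x1, z1 = min(y0 + 1, gy - 1), min(x0 + 1, gx - 1), min(z0 + 1, gz - 1)
    wy, wx, wz = fy - y0, fx - x0, fz - z0
    v = 0.0  # accumulate the 8 corner contributions
    for yy, ay in ((y0, 1 - wy), (y1, wy)):
        for xx, ax in ((x0, 1 - wx), (x1, wx)):
            for zz, az in ((z0, 1 - wz), (z1, wz)):
                v += ay * ax * az * grid[yy, xx, zz]
    return v

# Demo grid whose value equals its intensity-bin index, so slicing at
# guidance g = 0.5 lands halfway along the 8-bin value axis.
demo = np.broadcast_to(np.arange(8.0), (16, 16, 8)).copy()
print(slice_grid(demo, 0.5, 0.5, 0.5))  # 3.5
```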
  • the neural network can use the error back propagation (BP) algorithm to modify the size of the parameters in the initial neural network model during the training process, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, forward propagation of the input signal until the output will produce an error loss, and the parameters in the initial neural network model are updated by backpropagating the error loss information, so that the error loss converges.
  • the backpropagation algorithm is a backpropagation movement dominated by error loss, aiming to obtain the optimal parameters of the neural network model, such as the weight matrix.
  • Figure 1 shows a hardware system suitable for the electronic device of the present application.
  • the electronic device 100 may be a mobile phone, a smart screen, a tablet, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a laptop, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, etc.
  • the embodiment of the present application does not place any restrictions on the specific type of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure shown in FIG. 1 does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or less components than those shown in FIG. 1 , or the electronic device 100 may include a combination of some of the components shown in FIG. 1 , or , the electronic device 100 may include sub-components of some of the components shown in FIG. 1 .
  • the components shown in Figure 1 may be implemented in hardware, software, or a combination of software and hardware.
  • Processor 110 may include one or more processing units.
  • the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural network processing unit (NPU).
  • different processing units can be independent devices or integrated devices.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has recently used or recycled. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access and reduces the waiting time of the processor 110, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • the processor 110 may include at least one of the following interfaces: an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and a USB interface.
  • the processor 110 may be configured to execute the image processing method provided by the embodiment of the present application; for example: run a camera application of the electronic device; display the first image, where the first image is an image of the photographed object collected based on the first exposure time, the photographed object is a moving object, the first exposure time is less than the first duration, and the first image includes stripes and/or color cast image areas; detect the first operation, where the first operation instructs the electronic device to take pictures or record video; in response to the first operation, obtain a second image, where the second image is an image of the photographed object collected based on the second exposure time, and the second exposure time is an integer multiple of the first duration; based on the first image and the second image, obtain a color conversion matrix and/or brightness parameters, where the color conversion matrix is used to adjust the color of the first image and the brightness parameters are used to adjust the brightness of the first image; perform first image processing on the first image based on the color conversion matrix and/or brightness parameters to obtain a third image, where the third image is an image with the stripes and/or color cast image areas removed; and display or save the third image.
  • connection relationship between the modules shown in FIG. 1 is only a schematic illustration and does not constitute a limitation on the connection relationship between the modules of the electronic device 100 .
  • each module of the electronic device 100 may also adopt a combination of various connection methods in the above embodiments.
  • the wireless communication function of the electronic device 100 can be implemented through antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, modem processor, baseband processor and other components.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display 194 may be used to display images or videos.
  • Display 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini light-emitting diode (Mini LED), a micro light-emitting diode (Micro LED), a micro OLED (Micro OLED), or quantum dot light-emitting diodes (QLED).
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted through the lens to the camera sensor, which converts the optical signal into an electrical signal and passes it to the ISP for processing, converting it into an image visible to the naked eye.
  • ISP can algorithmically optimize the noise, brightness and color of the image. ISP can also optimize parameters such as exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 193.
  • a camera 193 (which may also be referred to as a lens) is used to capture still images or videos. It can be triggered by application instructions to realize the camera function, such as capturing images of any scene.
  • the camera can include imaging lenses, filters, image sensors and other components. The light emitted or reflected by the object enters the imaging lens, passes through the optical filter, and finally converges on the image sensor.
  • the imaging lens is mainly used to collect and image the light emitted or reflected by all objects within the camera's field of view (which may also be called the scene to be shot, the target scene, or the scene the user expects to shoot);
  • the filter is mainly used to filter out excess light waves (for example, light waves other than visible light, such as infrared);
  • the image sensor can be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the image sensor is mainly used to photoelectrically convert the received optical signal into an electrical signal, and then transfer the electrical signal to the ISP to convert it into a digital image.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the digital signal processor is used to process digital signals. In addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3 and MPEG4.
  • the gyro sensor 180B may be used to determine the motion posture of the electronic device 100 .
  • The angular velocity of the electronic device 100 about three axes (i.e., the x-axis, y-axis, and z-axis) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the angle at which the electronic device 100 shakes, and calculates the distance that the lens module needs to compensate based on the angle, so that the lens can offset the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used in scenarios such as navigation and somatosensory games.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally the x-axis, y-axis, and z-axis). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to identify the posture of the electronic device 100 as an input parameter for applications such as horizontal and vertical screen switching and pedometer.
  • distance sensor 180F is used to measure distance.
  • Electronic device 100 can measure distance via infrared or laser. In some embodiments, such as in a shooting scene, the electronic device 100 may utilize the distance sensor 180F to measure distance to achieve fast focusing.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touching.
  • fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement functions such as unlocking, accessing application locks, taking photos, and answering incoming calls.
  • the touch sensor 180K is also called a touch device.
  • the touch sensor 180K can be disposed on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch panel".
  • the touch sensor 180K is used to detect a touch operation acted on or near the touch sensor 180K.
  • the touch sensor 180K may pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 and at a different position from the display screen 194 .
  • the hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is introduced below.
  • FIG. 2 is a schematic diagram of a software system of an electronic device provided by an embodiment of the present application.
  • the system architecture may include an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240 and a hardware layer 250.
  • Application layer 210 may include a camera application.
  • the application layer 210 may also include applications such as gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, music, video, and SMS.
  • the application framework layer 220 provides an application programming interface (API) and programming framework for applications in the application layer; the application framework layer may include some predefined functions.
  • the application framework layer 220 may include a camera access interface; the camera access interface may include camera management and camera devices.
  • the camera management can be used to provide an access interface for managing cameras; the camera device can be used to provide an interface for accessing the camera.
  • Hardware abstraction layer 230 is used to abstract hardware.
  • the hardware abstraction layer can include a camera abstraction layer and abstraction layers for other hardware devices; the camera abstraction layer can include camera device 1, camera device 2, and so on; the camera hardware abstraction layer can be connected to the camera algorithm library and can call algorithms from the camera algorithm library.
  • the camera algorithm library may include an image processing algorithm, which is used to execute the image processing method provided by the embodiment of the present application when the image processing algorithm is run.
  • the driver layer 240 is used to provide drivers for different hardware devices.
  • the driver layer may include camera device drivers.
  • Hardware layer 250 may include image sensors, image signal processors, and other hardware devices.
  • The reason for Luma banding is: in a shooting scene lit by a 50 Hz AC stroboscopic light source, if the exposure time of the electronic device is not an integer multiple of 10 ms, the alternating-current sine wave generated by the stroboscopic light source cannot be cancelled out by the exposure integration period, causing regular brightness stripes to appear in the collected images;
  • the reason for Chroma banding is: when the voltage of the stroboscopic light source changes by 10%, the color temperature of the light source changes greatly (for example, by around 1000 K);
  • the imaging of an image is related to the color temperature; when the voltage of the AC-powered light source changes slightly, the color temperature changes greatly, causing a color cast problem in the image; therefore, if banding appears in the image, it seriously affects the image quality.
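The cancellation argument above can be illustrated numerically. In this sketch (illustrative only, not part of the patent), the 50 Hz source's intensity is modelled as a rectified sine flickering 100 times per second, and each sensor row integrates it over its own exposure window:

```python
import numpy as np

# Illustrative sketch: a 50 Hz AC source flickers at 100 Hz, so its intensity
# can be modelled as |sin(2*pi*50*t)|. Each sensor row integrates this
# intensity over its own exposure window; if the window is not an integer
# multiple of the 10 ms flicker period, the integrated energy varies with the
# row's start time, which is what produces the brightness stripes.
def row_energy(start_ms, exposure_ms, mains_hz=50, n=20000):
    # midpoint sampling of the exposure window; times in milliseconds
    t = start_ms + (np.arange(n) + 0.5) * exposure_ms / n
    intensity = np.abs(np.sin(2 * np.pi * mains_hz * t / 1000.0))
    return intensity.mean() * exposure_ms

starts = np.linspace(0.0, 20.0, 41)              # rows start exposing at staggered times
uniform = [row_energy(s, 10.0) for s in starts]  # 10 ms = one flicker period
banded = [row_energy(s, 7.0) for s in starts]    # 7 ms: not an integer multiple

print("energy spread, 10 ms exposure:", np.ptp(uniform))  # ~0: all rows equal
print("energy spread,  7 ms exposure:", np.ptp(banded))   # clearly > 0: stripes
```

With a 10 ms exposure every row receives the same energy regardless of when it starts; with 7 ms the energy oscillates with the row's start time, exactly the mechanism described above.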
  • Embodiments of the present application provide an image processing method and an electronic device. In a shooting environment where the light source is a stroboscopic light source, the electronic device needs to shoot moving objects and therefore reduces the exposure time, so banding appears in the image;
  • the color migration and brightness migration of the short-exposure image are performed based on the normal-exposure image, thereby removing the banding stripes and color cast image areas in the short-exposure image;
  • this ensures that when collecting images of a moving subject at the moment of movement, the banding stripes and color cast image areas in the image are removed, improving image quality.
  • FIG. 3 is a schematic diagram of an application scenario of the image processing method provided by the embodiment of the present application.
  • The image processing method in the embodiment of the present application can be applied to the field of photography; for example, it can be applied to photographing moving objects in a stroboscopic light source shooting environment (for example, an indoor shooting environment); the moving object may be a moving user, a moving object, or an image played in a video (for example, a movie).
  • For example, the photographed object 270 moves in an indoor place with a strobe light source (for example, an indoor lighting device such as an electric lamp 260); the electronic device 100 runs a camera application and collects an image including an image of the photographed object 270.
  • The preview image including the photographed object 270 can be displayed in real time on the display screen; while viewing the preview image on the display screen of the electronic device 100, if the user wants to capture the image of the photographed object 270 at the moment of movement, he or she can click the shooting control on the shooting interface.
  • After the shooting control of the electronic device 100 is triggered, the electronic device 100 can capture an image of the moment when the photographed object 270 is moving.
  • In FIG. 3, a user is taken as an example of the photographed object 270; the photographed object may also be an animal, a vehicle, a robot, etc.
  • When the subject moves in a stroboscopic light shooting environment, it can be, for example, a user playing badminton or table tennis, or practicing yoga indoors.
  • the shooting object in the shooting perspective of the electronic device may be one user or multiple users.
  • the lighting equipment can work under the driving of alternating current.
  • The working principle of a fluorescent lamp is that, under the action of a high-voltage current, the inert gas in the lamp tube discharges, and the electrons generated by the discharge reach the phosphor coating of the fluorescent tube, causing the lamp to emit light.
  • Take a fluorescent lamp with a working voltage of 50 Hz alternating current as an example: when the fluorescent lamp is driven by this alternating current, the 50 Hz current causes the lamp to strobe 100 times per second.
  • If the working voltage of the fluorescent lamp is 60 Hz alternating current, the 60 Hz current causes the lamp to strobe 120 times per second.
  • the electronic device is in the camera mode
  • the display screen of the electronic device can display a preview image
  • the electronic device can generate a snapshot image (which may also be called shooting image).
  • The stroboscopic light source in the shooting scene affects the image collected by the image sensor, which may cause banding stripes to appear in the preview image stream and/or in the image captured by the electronic device; for example, as shown in Figure 4.
  • The image processing method of the embodiment of the present application can be applied to the photographing mode of the camera application; through this method, when the electronic device photographs moving objects in a shooting environment with a stroboscopic light source, the banding stripes and color cast image areas can be removed, improving the color accuracy and brightness accuracy of the image and thus the image quality.
  • By performing the image processing method provided by the embodiment of the present application, an image with banding removed can be obtained; removing image banding may refer to removing local color cast problems in the image (for example, Chroma banding) and/or removing alternating light and dark stripes in the image (for example, Luma banding).
  • the image processing method in the embodiment of the present application can also be applied to the field of video recording, video calling, or other image processing fields.
  • video call scenarios may include but are not limited to the following scenarios:
  • Figure 5 is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • the method 300 may be executed by the electronic device shown in FIG. 1; the method 300 includes steps S310 to S350, and steps S310 to S350 will be described in detail below respectively.
  • The light source in the shooting environment of the electronic device in the method shown in Figure 5 is a stroboscopic light source; because the light source is stroboscopic, when the exposure time used by the electronic device to acquire the image is not an integer multiple of (1000 / the number of times the light source turns on and off per second) milliseconds, banding stripes will appear in the image.
  • Step S310 Run the camera application in the electronic device.
  • The user can instruct the electronic device to run the camera application by clicking the icon of the "Camera" application; alternatively, when the electronic device is in the lock-screen state, the user can instruct the electronic device to run the camera application by sliding to the right on the display screen.
  • the electronic device is in a locked screen state, and the lock screen interface includes an icon of the camera application, and the user instructs the electronic device to run the camera application by clicking on the icon of the camera application.
  • For an application that has permission to call the camera application, the user can instruct the electronic device to run the camera application by clicking the corresponding control.
  • For example, while the electronic device is running an instant messaging application, the user can instruct it to run the camera application by selecting the camera function control.
  • running the camera application may refer to launching the camera application.
  • Step S320 Display a first image, where the first image includes stripes and/or color cast image areas.
  • the first image is an image of the photographed object collected based on the first exposure time
  • the photographed object is a moving object
  • the first exposure time is less than the first duration
  • the first image may include stripes, that is, the first image may include Luma banding.
  • the first image may include stripes and color cast image areas, that is, the first image may include Luma banding and Chroma banding.
  • Luma banding refers to the banding stripes produced by shortening the exposure time; Luma banding involves only changes in brightness, without color changes; it occurs because the alternating-current sinusoidal wave generated by the stroboscopic light source in the shooting environment cannot be cancelled out by the exposure integration period, causing regular brightness stripes to appear in the collected images.
  • Chroma banding refers to the color cast of a local area in an image.
  • the local area in the image does not match the overall color of the image, and a red, green or blue color cast appears.
  • The reason for Chroma banding is that the energy change is accompanied by a change in color temperature, resulting in color cast problems in the image; for example, if different colors appear within the light and dark stripes of Luma banding, Chroma banding appears in the image.
  • Stripes can refer to global banding, that is, alternating light and dark stripes appearing across the entire image; color cast image areas can refer to local banding, that is, local color cast areas in the image whose colors do not match the overall image; for example, there may be a red, green, or blue color cast.
  • Because the exposure time of the first image is less than the first duration, the exposure integration period cannot cancel out the light-source fluctuation; therefore, banding stripes may appear in the first image; in addition, when the voltage of the stroboscopic light source changes by 10%, the color temperature of the light source changes greatly (for example, by about 1000 K); since the imaging of the image is related to the color temperature, a slight change in the light source voltage causes a large change in color temperature, resulting in color cast problems in the image.
  • the first image may be an image as shown in (a) of Figure 10 .
  • The image includes alternating light and dark stripes and a color cast image area 704; the image area 704 may be a red, green, blue, or other color cast image area.
  • the first image may refer to a preview image in a preview interface in the electronic device; wherein the preview interface may refer to a photo preview interface or a video recording preview interface.
  • Step S330 A first operation is detected, and the first operation instructs the electronic device to take photos or videos.
  • the first operation may refer to the operation of clicking the photo-taking control, as shown in (b) of Figure 11 .
  • The first operation may refer to the operation of clicking the video recording control. It should be understood that the above is only an example of the first operation; the user may also instruct the electronic device to take photos or record videos by voice or by other operations; this application does not impose any limitation on this.
  • the moving object may refer to a moving user, a moving object, or an image played in a video (for example, a movie).
  • the electronic device includes an image signal processor, and the first image may be an image output by the image signal processor.
  • The first image may be an image output by the image signal processor; since the image signal processor may perform denoising processing on the short-exposure Raw image, the image details in the first image are richer even though the first image includes banding stripes.
  • Step S340 In response to the first operation, acquire the second image.
  • the second image is an image of the photographed object collected based on the second exposure time
  • the second exposure time is an integer multiple of the first duration
  • Because the second exposure time is an integer multiple of the first duration, the exposure integration period can cancel out the light-source fluctuation; therefore, banding stripes will not appear in the second image.
  • the exposure time of the second image may be an integer multiple of 10 ms.
  • Each row of pixels in the second image receives the same energy, so there are no energy fluctuations or color cast problems in the second image; therefore, stripes and color cast image areas do not appear in the second image, that is, the second image contains neither Luma banding nor Chroma banding.
  • the size of the first duration is obtained based on the number of times the strobe light source turns on and off per second.
  • the number of times the stroboscopic light source turns on and off per second is related to the frequency of the operating voltage of the stroboscopic light source.
  • the first duration = 1000 / (the number of times the strobe light source turns on and off per second) milliseconds.
  • For example, if the frequency of the working voltage of the stroboscopic light source is 50 Hz, that is, the stroboscopic light source brightens and darkens 100 times per second, the first duration is 10 ms; in this case, the exposure time of the first image is less than 10 ms, and the exposure time of the second image is an integer multiple of 10 ms.
  • If the frequency of the working voltage of the stroboscopic light source is 60 Hz, that is, the stroboscopic light source brightens and darkens 120 times per second, the first duration is approximately 8.3 ms; in this case, the exposure time of the first image is less than 8.3 ms, and the exposure time of the second image is an integer multiple of 8.3 ms.
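The first-duration rule above can be written as a small sketch (the function names are illustrative, not from the patent):

```python
# Sketch of the first-duration rule: the strobe light source flickers at
# twice the mains frequency, and the banding-free exposure period is
# 1000 / (flickers per second) milliseconds.
def first_duration_ms(mains_hz):
    flickers_per_second = 2 * mains_hz   # the light brightens/darkens twice per AC cycle
    return 1000.0 / flickers_per_second

def is_banding_free(exposure_ms, mains_hz, tol=1e-6):
    # Banding-free when the exposure time is a positive integer multiple
    # of the first duration.
    period = first_duration_ms(mains_hz)
    ratio = exposure_ms / period
    return round(ratio) >= 1 and abs(ratio - round(ratio)) < tol

print(first_duration_ms(50))            # 10.0 (ms)
print(round(first_duration_ms(60), 1))  # 8.3 (ms)
print(is_banding_free(30, 50))          # True: 3 x 10 ms
print(is_banding_free(7, 50))           # False: short exposure -> banding
```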
  • the second image is an image obtained by performing third image processing on the Raw image collected by the electronic device, and the third image processing includes color space conversion processing.
  • the electronic device can collect a normally exposed Raw image, and perform color space conversion processing on the normally exposed Raw image to obtain a second image.
  • the second image may be an image in RGB color space, or an image in YUV color space.
  • The second image is a normal-exposure image; for example, the second image can be obtained by downsampling a normally exposed Raw image and performing color space conversion; because the present application only needs the color information and brightness information in the second image to migrate to the first image, the requirements for detail in the second image are lower, that is, the second image does not need to be processed by the image signal processor.
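As a concrete sketch of this lighter-weight path (the RGGB Bayer layout and the function name are assumptions for illustration; the patent does not specify the mosaic pattern), a normally exposed Raw mosaic can be turned into a half-resolution RGB image by averaging each 2x2 block:

```python
import numpy as np

def raw_rggb_to_half_res_rgb(raw):
    # raw: (H, W) Bayer mosaic, assumed RGGB pattern, H and W even.
    r  = raw[0::2, 0::2]          # red sites
    g1 = raw[0::2, 1::2]          # first green site of each block
    g2 = raw[1::2, 0::2]          # second green site
    b  = raw[1::2, 1::2]          # blue sites
    g = (g1 + g2) / 2.0           # average the two greens
    return np.stack([r, g, b], axis=-1)   # (H/2, W/2, 3) downsampled RGB

raw = np.array([[10, 20, 10, 20],
                [30, 40, 30, 40],
                [10, 20, 10, 20],
                [30, 40, 30, 40]], dtype=float)
rgb = raw_rggb_to_half_res_rgb(raw)
print(rgb.shape)   # (2, 2, 3)
print(rgb[0, 0])   # [10. 25. 40.]
```

This keeps the color and brightness statistics needed for migration while skipping the full ISP pipeline, as described above.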
  • Before acquiring the second image, the method further includes: detecting that the shooting scene in which the electronic device is located includes a moving object; and detecting that stripes and/or color cast image areas are present in the first image.
  • a detection module may be included in the electronic device, and the detection module may detect the photographed object; when the photographed object includes a moving object, the detection module outputs an identifier, which may indicate that the photographed scene includes a moving object.
  • An anti-flicker sensor (Flicker Sensor) in the electronic device can be used to detect whether there are stripes; the anti-flicker sensor can be a sensor that samples the ambient light.
  • the electronic device when there are moving objects in the shooting scene and the collected images include banding, the electronic device can be triggered to execute the image processing method provided by the embodiments of the present application, that is, the method of removing the banding in the image.
  • The electronic device collects images of the moving object at the moment of movement; because the photographed object is moving, the electronic device usually needs to reduce the motion blur in the image; to reduce motion blur, the electronic device typically shortens the exposure time and increases the sensitivity value; however, in a strobe light source shooting environment, reducing the exposure time causes banding in the image, so the banding in the image needs to be processed.
  • Step S350 Obtain the color conversion matrix and/or brightness parameters based on the first image and the second image.
  • the color conversion matrix is used to adjust the color of the first image
  • the brightness parameter is used to adjust the brightness of the first image
  • Since the second image is captured with a normal exposure time, there is no banding in the second image; the color conversion matrix and brightness parameter are obtained by migrating the color and brightness of the second image to the first image.
  • the above method also includes:
  • the image processing model is a convolutional neural network; for example, the image processing model can be HDRnet.
  • the training method of the image processing model please refer to the relevant description shown in subsequent Figure 8.
  • The color conversion matrix and/or brightness parameters are obtained based on the bilateral grid output by the image processing model; when performing color migration and brightness migration on the first image through the image processing model, the model can identify the deviation areas where the image content of the first image and the second image differs greatly, that is, the ghost areas between the two images; therefore, the color conversion matrix and/or brightness parameters obtained through the image processing model will not introduce ghost areas when performing color adjustment and/or brightness adjustment on the first image, thereby improving image quality.
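The bilateral-grid idea can be sketched as follows. This is an HDRnet-style simplification with assumed shapes and nearest-cell lookup instead of trilinear interpolation; the 3x4 affine layout per cell is an illustrative assumption, not the patent's specification:

```python
import numpy as np

def slice_bilateral_grid(grid, image):
    # grid: (GH, GW, GD, 12) -- a 3x4 affine colour transform per cell,
    # indexed by spatial position (y, x) and the guide value (luma).
    # image: (H, W, 3) with values in [0, 1].
    GH, GW, GD, _ = grid.shape
    H, W, _ = image.shape
    ys = np.clip(np.arange(H) * GH // H, 0, GH - 1)     # spatial cell indices
    xs = np.clip(np.arange(W) * GW // W, 0, GW - 1)
    luma = image.mean(axis=-1)                          # guide channel
    zs = np.clip((luma * GD).astype(int), 0, GD - 1)    # luma cell index per pixel
    affine = grid[ys[:, None], xs[None, :], zs]         # (H, W, 12) nearest-cell lookup
    A = affine[..., :9].reshape(H, W, 3, 3)             # per-pixel 3x3 colour matrix
    b = affine[..., 9:]                                 # per-pixel offset
    return np.einsum('hwij,hwj->hwi', A, image) + b     # apply the affine transform

# An identity grid leaves the image unchanged:
grid = np.zeros((4, 4, 8, 12))
grid[..., [0, 4, 8]] = 1.0          # A = identity, b = 0 in every cell
img = np.random.rand(16, 16, 3)
out = slice_bilateral_grid(grid, img)
print(np.allclose(out, img))        # True
```

Because neighbouring pixels with different luma fall into different grid cells, the per-pixel transforms can vary smoothly across the image, which is what lets the grid avoid transferring colour into ghost areas.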
  • Alternatively, registration processing and smoothing processing can be performed on the first image and the second image, and a pixel-by-pixel difference between the registered first image and second image can be computed to obtain the color conversion matrix and/or brightness parameter.
  • Step S360 Perform first image processing on the first image based on the color conversion matrix and/or brightness parameters to obtain a third image.
  • the third image is an image with stripes and/or color cast image areas removed.
  • The color of the first image can be adjusted based on the color conversion matrix to remove the color cast image area in the first image; the brightness of the first image can be adjusted based on the brightness parameters to remove the banding stripes in the first image; for example, as shown in (c) in Figure 11, the image area 704 in the image can be removed based on the color conversion matrix, and the light and dark stripes in the image can be removed based on the brightness parameter.
  • the first image may be as shown in (d) in Figure 11
  • the third image may be as shown in Figure 12.
  • a third image including:
  • the first image is interpolated based on the color conversion matrix and/or the brightness parameter to obtain a third image.
  • the color conversion matrix and/or the brightness parameter may be multiplied by the matrix of the first image to obtain the third image.
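A minimal sketch of such a multiplication (the helper name and the split into one global 3x3 matrix plus a brightness gain are illustrative assumptions; the patent also allows per-pixel interpolation):

```python
import numpy as np

def apply_correction(first_image, color_matrix, brightness_gain):
    # first_image: (H, W, 3) in [0, 1]; color_matrix: (3, 3);
    # brightness_gain: scalar or (H, W) per-pixel gain.
    corrected = first_image @ color_matrix.T                      # colour migration
    corrected = corrected * np.asarray(brightness_gain)[..., None]  # brightness migration
    return np.clip(corrected, 0.0, 1.0)                           # keep values in range

img = np.full((2, 2, 3), 0.4)
identity = np.eye(3)
out = apply_correction(img, identity, 1.5)
print(out[0, 0])   # [0.6 0.6 0.6]
```

With the identity matrix the colour is untouched and only the brightness gain acts, mirroring how the two corrections can be applied independently.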
  • a third image including:
  • second image processing is performed on the processed image to obtain a third image
  • the second image processing is a color processing algorithm in the first color space.
  • the first color space may be an RGB color space; that is, the first image processing may be performed in the RGB color space.
  • the first image processing may be performed first; and then other algorithms related to color processing in the RGB color space may be performed.
  • When the image signal processor processes the Raw image collected by the electronic device and executes the RGB domain algorithms, the first image may first be processed based on the color conversion matrix and/or the brightness parameter, and then the other RGB domain algorithms may be executed; because the first image processing can obtain an image with the banding stripes and/or color cast image areas removed, in other words, it can correct the color and brightness of the image;
  • when the accuracy of color and brightness is high, executing the other RGB domain algorithms ensures that the de-banded image undergoes RGB domain algorithm processing on the basis of accurate color and brightness, thereby improving image quality.
  • the first image processing can be performed first; and then other algorithms in other RGB color spaces can be performed.
  • the first color space may be the YUV color space; that is, the first image processing may be performed in the YUV color space.
  • the image processing model is trained through the following methods:
  • the sample data includes a first sample image, a second sample image and a third sample image.
  • the second sample image includes the image content, stripes and/or color cast image areas of the first sample image.
  • The third sample image has the same image content as the first sample image, and the image quality of the third sample image is higher than the image quality of the first sample image;
  • the image processing model to be trained is trained based on the difference between the predicted image and the third sample image to obtain the image processing model.
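The training step described above can be sketched numerically. In this hedged toy version, the "model" is just a 3x3 colour matrix standing in for the real network, fitted by gradient descent on the difference between the predicted image and the clean target, mirroring the loss described above; all names and the choice of squared-error loss are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
degraded = rng.random((64, 64, 3))                 # sample with stripes/colour cast
true_matrix = np.array([[0.9, 0.05, 0.0],
                        [0.0, 1.10, 0.0],
                        [0.1, 0.00, 0.95]])
target = degraded @ true_matrix.T                  # clean ground-truth sample

W = np.eye(3)                                      # model parameters, start at identity
lr = 1.0
for _ in range(500):
    pred = degraded @ W.T                          # predicted image
    err = pred - target                            # the difference drives the update
    grad = np.einsum('hwi,hwj->ij', err, degraded) / err.size
    W -= lr * grad                                 # gradient-descent step

print(np.abs(W - true_matrix).max() < 1e-2)        # True: mapping recovered
```

The real model is of course far richer than a single matrix, but the loop shows the essential structure: predict from the degraded input, compare against the higher-quality sample, and update the parameters from the difference.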
  • Step S370 Display or save the third image.
  • For an application that has permission to call the camera application: when another application calls the camera application to obtain images, the third image can be displayed.
  • the electronic device runs the shooting mode (or video recording mode) of the camera application, after the electronic device detects an operation instructing taking pictures (or an operation instructing video recording), the third image may be saved.
  • When the light source of the shooting environment in which the electronic device is located is a stroboscopic light source, the electronic device needs to shoot moving objects and therefore reduces the exposure time, resulting in banding stripes and/or color cast image areas in the image;
  • in the embodiment of the present application, the color conversion matrix and/or brightness parameters are obtained by performing color migration and brightness migration on the short-exposure image based on the normal-exposure image; the local color cast problem in the short-exposure image can be removed based on the color conversion matrix, and the banding stripes in the short-exposure image can be removed based on the brightness parameters; this ensures that when collecting images of moving objects at the moment of movement, the banding in the image is removed (for example, the banding stripes and color cast image areas are removed), improving image quality.
  • The color conversion matrix and/or brightness parameters are obtained based on the bilateral grid output by the image processing model; when color migration and brightness migration are performed on the first image through the image processing model, the differences in image content between the first image and the second image can be identified; therefore, the color conversion matrix and/or brightness parameters obtained through the image processing model will not introduce ghost areas when color adjustment and/or brightness adjustment are performed on the first image, improving image quality.
  • FIG. 6 is a schematic flow chart of a processing flow of an image signal processor provided by an embodiment of the present application.
  • the method 400 may be executed by the electronic device shown in FIG. 1; the method 400 includes steps S410 to S460, and steps S410 to S460 will be described in detail below respectively.
  • Step S410 The image sensor collects Raw images.
  • a Raw image refers to an image in a Raw color space; a Raw image may refer to original image data output from an image sensor without interpolation and color mixing.
  • Step S420 Perform first color space conversion processing on the Raw image.
  • a Raw image can be converted into an RGB color space to obtain an RGB image.
  • Step S430 RGB domain algorithm processing.
  • The image processing method provided in the embodiment of the present application may be the first algorithm executed in the RGB domain algorithm processing; for example, performing RGB domain algorithm processing on the RGB image in step S430 includes: first performing image banding removal processing on the RGB image to obtain an RGB image with banding removed, and then performing other RGB domain algorithm processing on the de-banded RGB image.
  • the image processing method provided by the embodiment of the present application may be the first algorithm executed among the color-related algorithms in RGB domain algorithm processing.
• the image banding removal processing can be performed first, so that the color and brightness of the image are adjusted with high accuracy; when the accuracy of the color and brightness is high, executing the other RGB domain algorithms ensures that the banding-removed image undergoes the RGB domain algorithm processing on the basis of accurate image color and brightness, thereby improving image quality.
• performing RGB domain algorithm processing on the RGB image in step S430 includes: color-related algorithm processing and other RGB domain algorithm processing; when performing the color-related algorithm processing, the image banding removal processing is performed first to obtain an RGB image with banding removed; then the other color-related algorithm processing is performed on the banding-removed RGB image.
• the image processing method provided by the embodiment of the present application, that is, the image banding removal processing, can be performed first, before the color-related algorithm processing; through the image banding removal processing, an image with banding removed can be obtained, so that the color and brightness of the image are accurate; when the accuracy of the color and brightness is high, performing the other color-related algorithm processing on the banding-removed image ensures that it is processed on the basis of accurate image color and brightness, improving image quality.
• the image processing method provided by the embodiment of the present application is not limited to being executed in the RGB domain algorithm processing.
  • the implementation of the image strip removal process may refer to steps S510 to S570 shown in FIG. 7 .
  • Step S440 Perform a second color space conversion process on the processed RGB image.
  • the second color space may refer to the YUV color space
  • the processed RGB image may be converted to the YUV color space
  • YUV domain algorithm processing may be performed.
  • Step S450 YUV domain algorithm processing.
  • YUV domain algorithm processing includes but is not limited to:
• brightness noise reduction processing, edge enhancement processing, contrast processing, etc.
  • Step S460 Output the processed image.
• the above description takes the case where the RGB domain algorithm processing includes the image banding removal processing as an example; optionally, the image banding removal processing can also be performed in the YUV domain.
• the image banding removal processing is added to the algorithm processing in the RGB domain; when the light source of the shooting environment where the electronic device is located is a stroboscopic light source, and the electronic device needs to shoot a moving object and therefore reduces the exposure time, banding occurs in the image; the banding in the short-exposure image is then removed, ensuring that when images of a moving object are collected at the moment of movement, the banding stripes and color cast image areas in the image are removed, improving image quality.
  • FIG. 7 is a schematic flow chart of an image processing method provided by an embodiment of the present application.
  • the method 500 may be executed by the electronic device shown in FIG. 1; the method 500 includes steps S501 to S508, and steps S501 to S508 will be described in detail below respectively.
  • the light source of the shooting environment where the electronic device is located is a stroboscopic light source.
  • the method 500 shown in FIG. 7 may refer to the related algorithm of image band removal processing shown in FIG. 6 .
• the method 500 shown in FIG. 7 may be the first algorithm performed in the RGB domain algorithm processing shown in FIG. 6.
• when the image signal processor processes the Raw image collected by the electronic device and executes the RGB domain algorithms, the method shown in Figure 7 can be executed first and the other RGB domain algorithms executed afterwards; the image processing method shown in Figure 7 obtains an image with banding removed; in other words, the method shown in Figure 7 corrects the color and brightness of the image; when the accuracy of the color and brightness is high, executing the other RGB domain algorithms ensures that the banding-removed image is processed by the RGB domain algorithms on the basis of accurate image color and brightness, thereby improving image quality.
  • the method 500 shown in FIG. 7 may be executed before all color-related algorithms in the RGB domain algorithm processing shown in FIG. 6 .
• the method 500 shown in Figure 7 can be executed first, before the color-related algorithms; through the image processing method shown in Figure 7, an image with banding removed can be output; in other words, the method shown in Figure 7 corrects the color and brightness of the image; when the accuracy of the color and brightness is high, executing the other color-related algorithms ensures that the banding-removed image undergoes the other color processing on the basis of accurate image color and brightness, improving image quality.
  • the method 500 shown in FIG. 7 may not be limited to execution in the RGB domain algorithm shown in FIG. 6 .
  • the method 500 shown in Figure 7 can be executed in the YUV domain algorithm shown in Figure 6; this application does not impose any limitation on this.
  • Step S501 Run the camera application.
• the user can instruct the electronic device to run the camera application by clicking the icon of the "Camera" application; or, when the electronic device is in the lock screen state, the user can slide to the right on the display screen of the electronic device to instruct the electronic device to run the camera application.
  • the electronic device is in a locked screen state, and the lock screen interface includes an icon of the camera application, and the user instructs the electronic device to run the camera application by clicking on the icon of the camera application.
  • the application has the permission to call the camera application; the user can instruct the electronic device to run the camera application by clicking the corresponding control.
• when the electronic device is running an instant messaging application, the user can instruct the electronic device to run the camera application by selecting the control of the camera function.
  • running the camera application may refer to launching the camera application.
  • Step S502 It is detected that the photographed object includes a moving object.
  • a detection module may be included in the electronic device, and the detection module may detect the photographed object; when the photographed object includes a moving object, the detection module outputs an identifier, which may indicate that the photographed scene includes a moving object.
  • the moving object may refer to a moving user, a moving object, or an image played in a video, etc.
  • Step S503 It is detected that image banding exists.
• the image banding may include Luma banding and Chroma banding; Luma banding refers to banding stripes caused by shortening the exposure time, and has only light and dark transitions without color changes, for example, the stripes shown in (a) in Figure 10; Chroma banding refers to an image color cast in a local area of the image, where the local area does not match the overall color of the image; for example, the image color cast can be a red, green or blue color cast, etc., as in the image area 704 shown in (a) in Figure 10.
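• Because Luma banding consists of row-aligned light/dark stripes (each sensor row is exposed at a slightly different phase of the flicker), one crude way to visualize the distinction is to inspect the per-row mean luminance; the sketch below assumes this row-profile heuristic and an invented 5% threshold, whereas the patent detects banding with an anti-flicker sensor:

```python
import numpy as np

def row_mean_profile(gray):
    """Mean luminance of each image row; Luma banding appears as a
    periodic fluctuation of this profile."""
    return gray.mean(axis=1)

def has_luma_banding(gray, threshold=0.05):
    """Crude check: relative peak-to-peak swing of the row means.
    The 5% threshold is illustrative, not from the patent."""
    profile = row_mean_profile(gray)
    swing = profile.max() - profile.min()
    return bool(swing / max(profile.mean(), 1e-9) > threshold)
```

A synthetic image whose rows oscillate in brightness triggers the check, while a flat image does not.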
• an anti-flicker sensor (Flicker Sensor) in the electronic device can be used to detect whether image banding exists; the anti-flicker sensor can be a sensor that samples the ambient light.
  • Step S504 Obtain the ISP-processed short exposure image (an example of the first image).
  • the short-exposure image output after ISP processing may refer to an RGB image obtained by using the short-exposure Raw image in the multi-frame Raw image as a reference.
  • the exposure time of the short exposure image usually does not satisfy an integer multiple of 10 ms; therefore, banding stripes exist in the short exposure image.
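• The "integer multiple of 10 ms" rule follows from 50 Hz mains power: the light flickers at twice the AC frequency, so one flicker period is 1000/(2*50) = 10 ms, and only exposures spanning whole flicker periods integrate the same energy on every row. A small helper sketching this check (an illustrative assumption, not code from the patent):

```python
def is_banding_free_exposure(exposure_ms, ac_hz=50):
    """True if the exposure spans a whole number of flicker periods.

    Mains-driven light flickers at 2 * ac_hz, so the flicker period is
    1000 / (2 * ac_hz) ms (10 ms at 50 Hz, ~8.3 ms at 60 Hz).
    """
    period_ms = 1000.0 / (2 * ac_hz)
    ratio = exposure_ms / period_ms
    return abs(ratio - round(ratio)) < 1e-6 and round(ratio) >= 1
```

At 60 Hz mains the same rule yields the ~8.3 ms multiples mentioned later in the text.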
• when the voltage of the stroboscopic light source changes slightly, the color temperature of the light source can change greatly (for example, by about 1000K); since the imaging of the image is related to the color temperature, large changes in color temperature can cause color casts in the image; therefore, color cast areas may also exist in short-exposure images.
• step S503 can be performed first and then step S504; or step S504 can be performed first and then step S503; or step S503 and step S504 can be performed simultaneously; this application does not impose any limitation on the order of step S503 and step S504.
  • a full-size short-exposure Raw image can be obtained, and the short-exposure Raw image is subjected to ISP processing to obtain a short-exposure image (for example, a 512*512 size RGB image).
  • the resolution of the obtained full-size short exposure Raw image can be 4096*2160.
  • Step S505 A first operation indicating taking a photo is detected.
  • the first operation may be an operation of clicking the photo taking control 705 .
  • Step S506 In response to the first operation, acquire a normal exposure image (an example of a second image).
• a normal exposure image may refer to a Raw image whose exposure time is an integer multiple of 10 ms; that is, when collecting a normal exposure image, the exposure time of the electronic device is an integer multiple of 10 ms.
• the exposure time of a normal exposure image is an integer multiple of 10 ms; therefore, there are no banding stripes in the normal exposure image, that is, there is no Luma banding in the image; in addition, the cause of Chroma banding is that the energy changes are accompanied by changes in color temperature, leading to color cast problems in the image; for example, if the light and dark stripes of Luma banding show different colors, Chroma banding appears in the image, such as the positions of the dark stripes appearing reddish and the positions of the bright stripes appearing bluish.
• in a normal exposure image, each row of pixels receives the same energy, so there are no energy fluctuations and no color cast problem; therefore, neither Luma banding nor Chroma banding appears in the normal exposure image.
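• The claim that every row receives the same energy can be checked numerically with a toy flicker model; the sin^2 waveform below is an assumed idealization of mains-driven light power (real lamp waveforms differ):

```python
import numpy as np

def row_energy(t_start_ms, exposure_ms, ac_hz=50, samples=20000):
    """Energy a sensor row integrates when its exposure starts at
    t_start_ms, under light power ~ sin^2(2*pi*ac_hz*t), which flickers
    at 2*ac_hz (period 10 ms at 50 Hz)."""
    t = np.linspace(t_start_ms, t_start_ms + exposure_ms, samples) / 1000.0
    power = np.sin(2 * np.pi * ac_hz * t) ** 2
    return power.mean() * exposure_ms

# A 10 ms exposure (one whole flicker period at 50 Hz) gives every row
# the same energy regardless of when it starts; a 3 ms exposure does
# not, which is exactly what produces the light/dark stripes.
```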
• the electronic device can collect multiple frames of Raw images, and the multiple frames of Raw images include S_A N_B, where S_A represents A frames of short exposure images and N_B represents B frames of normal exposure images; A is an integer greater than or equal to 1, and B is an integer greater than or equal to 1; the exposure time of a short exposure image is less than 10 ms; the exposure time of a normal exposure image is an integer multiple of 10 ms.
• the electronic device can collect multiple frames of Raw images, and the multiple frames of Raw images include S_A N_B L_C, where S_A represents A frames of short exposure images, N_B represents B frames of normal exposure images, and L_C represents C frames of long exposure images; A is an integer greater than or equal to 1, B is an integer greater than or equal to 1, and C is an integer greater than or equal to 0; the exposure time of a short exposure image is less than 10 ms; the exposure time of a normal exposure image is an integer multiple of 10 ms; the exposure time of a long exposure image is greater than the exposure time of a normal exposure image.
• a multi-frame Raw image may refer to 7 frames of S_4 N_2 L, that is, Raw images of SSSSNNL, where the 4 frames of S are short-exposure Raw images in the preview frames; the 2 frames of N are normal exposure images whose exposure value is greater than or equal to that of the 4 short exposure frames and whose exposure time is an integer multiple of 10 ms; the 1 frame of L is a long exposure image.
  • the RGB image can be obtained by performing color space conversion processing (for example, demosaic processing) on the first frame of normally exposed Raw image among the collected one or multiple frames of normally exposed Raw images.
• a normal exposure image can be obtained based on the first frame of normally exposed Raw image among the multi-frame images; because the time difference between the first frame of normally exposed Raw image and the short-exposure Raw image is short, obtaining the normal exposure image from the first frame and then performing color migration processing and brightness migration processing on the short exposure image can avoid introducing motion ghosts to a certain extent.
  • Step S507 Input the short exposure image and the normal exposure image into the image processing model for processing to obtain a bilateral grid.
• a bilateral grid refers to a data structure; in the embodiment of the present application, the bilateral grid may be a grid matrix; the grid matrix includes a color correction conversion matrix and a brightness parameter (for example, brightness order), where the color correction conversion matrix includes a red pixel gain (R gain), a green pixel gain (G gain) and a blue pixel gain (B gain); based on the data in the bilateral grid, the color and brightness of the normal exposure image can be migrated to the short exposure image, removing the banding stripes and color cast image areas in the short exposure image.
• the bilateral grid can be a 32*32*8*9 grid matrix, where 32*32 represents the width and height, 8 represents the brightness order, and 9 represents the color correction conversion matrix, that is, a 3*3 matrix acting on each RGB value.
  • the network structure of the image processing model can be a convolutional neural network; for example, the image processing model can be HDR Net; for the training method of the image processing model, please refer to the relevant description shown in Figure 8.
  • Step S508 Process the short-exposure image based on a bilateral grid to obtain an image with stripes and/or local color cast areas removed (an example of a third image).
• color migration processing can be performed on the short exposure image based on the color conversion matrix in the bilateral grid to remove the color cast image areas in the short exposure image; and/or brightness migration processing can be performed on the short exposure image based on the brightness parameters in the bilateral grid to remove the banding stripes in the short exposure image.
  • interpolation processing can be performed based on the data in the bilateral grid and the short exposure image to obtain an image without banding.
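• A minimal sketch of this step, assuming the bilateral grid stores a 3*3 color matrix per (x, y, brightness) cell, consistent with the 32*32*8*9 shape described above; nearest-cell lookup replaces the full trilinear interpolation to keep the example short:

```python
import numpy as np

def apply_bilateral_grid(image, grid):
    """Slice a bilateral grid of 3x3 color matrices into an image.

    `image` : HxWx3 RGB in [0, 1]
    `grid`  : GHxGWxLx3x3 matrices (e.g. 32x32x8 cells; the 9 in the
              32*32*8*9 grid is the flattened 3x3 matrix)
    Each pixel looks up its grid cell by position and brightness, then
    its RGB vector is multiplied by that cell's 3x3 matrix.
    """
    h, w, _ = image.shape
    gh, gw, gl = grid.shape[:3]
    luma = image.mean(axis=2)                      # crude brightness guide
    yi = np.clip((np.arange(h) * gh) // h, 0, gh - 1)
    xi = np.clip((np.arange(w) * gw) // w, 0, gw - 1)
    li = np.clip((luma * gl).astype(int), 0, gl - 1)
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            m = grid[yi[y], xi[x], li[y, x]]       # per-pixel 3x3 matrix
            out[y, x] = m @ image[y, x]
    return out
```

An identity-matrix grid leaves the image unchanged, while a grid of diag(2, 1, 1) matrices doubles the red channel, which is the same mechanism that lets the grid apply per-region R/G/B gains and brightness shifts.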
• step S506 obtains the normal exposure image after the first operation indicating taking a photo is detected; optionally, in one implementation, when the electronic device is in the preview state, that is, before the electronic device detects the first operation, the electronic device can use single-frame progressive HDR (Stagger HDR) technology to obtain the short-exposure image and the normal-exposure image.
• Stagger HDR refers to a "long and short frame" shooting technology with "row" as the output unit; that is, two exposures are performed in sequence to obtain the normal exposure image and the short exposure image.
• the above takes the case where the exposure time of the short exposure image is less than 10 ms and the exposure time of the normal exposure image is an integer multiple of 10 ms as an example; this application does not impose any limitation on this.
• the short exposure image refers to an image whose exposure time is less than 10 ms; the normal exposure image refers to an image whose exposure time is an integer multiple of 10 ms.
• when the operating voltage frequency of the stroboscopic light source in the shooting environment is 60 Hz, the stroboscopic light source lights up and dims 120 times per second.
• in this case, the short exposure image refers to an image whose exposure time is less than 8.3 ms; the normal exposure image refers to an image whose exposure time is an integer multiple of 8.3 ms.
• steps S501 to S508 shown in Figure 7 are exemplified by removing the banding stripes and color cast image areas in the RGB color space; the above steps of removing banding stripes and color cast image areas can also be performed in the YUV color space; if executed in the YUV color space, the short exposure image and the normal exposure image can be YUV images.
  • steps S501 to S508 shown in FIG. 7 are described as an example using a scene in which an electronic device takes a photo; the method shown in FIG. 7 can also be applied to a scene in which an electronic device records a video.
• the above takes obtaining a bilateral grid based on an image processing model as an example; in the embodiment of the present application, registration processing and smoothing processing can also be performed on the first image and the second image, and a pixel-by-pixel difference between the first image and the second image can be computed to obtain the color conversion matrix and/or brightness parameters.
• when the light source of the shooting environment in which the electronic device is located is a stroboscopic light source, the electronic device needs to shoot moving objects and therefore reduces the exposure time, resulting in banding stripes and/or color cast image areas in the image;
• the color conversion matrix and/or brightness parameters are obtained by performing color migration and brightness migration from the normal exposure image to the short exposure image; based on the color conversion matrix, the color cast problem in local image areas of the short exposure image can be removed; based on the brightness parameters, the banding stripes in the short exposure image can be removed; this ensures that when images of a moving object are collected at the moment of movement, the banding in the image is removed (for example, the banding stripes and color cast image areas), improving image quality.
• the color conversion matrix and/or brightness parameters are obtained based on the bilateral grid output by the image processing model; when color migration and brightness migration are performed on the first image through the image processing model, the difference in image content between the first image and the second image can be identified; therefore, the color conversion matrix and/or brightness parameters obtained through the image processing model will not introduce ghost areas when color adjustment and/or brightness adjustment are performed on the first image, improving image quality.
  • Figure 8 is a schematic flow chart of an image processing model training method provided by an embodiment of the present application.
  • the method 600 may be executed by the electronic device shown in FIG. 1; the method 600 includes steps S610 to S650, and steps S610 to S650 will be described in detail below respectively.
  • Step S610 Obtain training data.
• the training data includes a first sample image, a second sample image and a third sample image; the first sample image is an image without banding; the second sample image is an image obtained after adding banding to the first sample image; the third sample image is a banding-free image, and the image quality of the third sample image is higher than the image quality of the first sample image.
• the third sample image can be an ISP-processed RGB image without banding; the first sample image can be an RGB image obtained by color space conversion of a normally exposed sample Raw image; therefore, the image quality of the third sample image is higher than the image quality of the first sample image.
  • Step S620 Input the first sample image and the second sample image to the image processing model to be trained to obtain a predicted bilateral grid.
• the image processing model is used to learn the color difference and brightness difference between images without banding and images with banding in the same shooting scene; the purpose is to use the output predicted bilateral grid to migrate the color and brightness of the image with banding toward those of the banding-free image.
  • the network structure of the image processing model can be VGG network, or HDR Net.
  • Step S630 Interpolate the first sample image based on the predicted bilateral grid to obtain a predicted image.
  • the data in the bilateral grid and the data of the first sample image can be multiplied to obtain the predicted image.
  • Step S640 Based on the difference between the predicted image and the third sample image, update the parameters of the image processing model to obtain the trained image processing model.
• the difference between each pixel in the predicted image and the corresponding pixel in the third sample image can be calculated, and the image processing model to be trained is trained through the back propagation algorithm until the loss function of the image processing model to be trained converges, obtaining the trained image processing model.
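• The loop of steps S620 to S640 can be caricatured with a one-parameter stand-in: the "model" predicts a single gain g (playing the role of the bilateral grid), the gain multiplies the first sample image (step S630), and g is updated by gradient descent on the mean squared per-pixel difference against the third sample image (step S640). Everything here is a toy assumption; the real model is a convolutional network trained with backpropagation:

```python
import numpy as np

def train_gain(first_sample, target, lr=0.5, steps=200):
    """Toy stand-in for steps S620-S640: fit a single gain g so that
    g * first_sample matches the target, by gradient descent on the
    mean squared per-pixel difference."""
    g = 1.0
    n = first_sample.size
    for _ in range(steps):
        pred = g * first_sample                      # step S630 analogue
        grad = 2.0 * ((pred - target) * first_sample).sum() / n
        g -= lr * grad                               # step S640 analogue
    return g
```

With a uniform first sample of 0.4 and a target of 0.8, the fitted gain converges toward 2.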
• the Chroma banding image processing model can include 8 parameters: A*3, a*1, b*1 and c*3; where A represents amplitude, a represents frequency, b represents initial phase, and c represents the offset term; A*3 means there is one amplitude for each of the R, G and B pixels; c*3 means there is one offset term for each of the R, G and B pixels.
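• Taking these 8 parameters at face value (per-channel amplitude A, frequency a, initial phase b, per-channel offset c), a natural reading is a per-row sinusoidal model of the color banding; the sinusoidal form itself is an assumption consistent with flicker-induced stripes, since the text does not spell out the model equation:

```python
import numpy as np

def chroma_banding(rows, A, a, b, c):
    """Per-row banding offsets for the R, G and B channels:
    banding(y) = A * sin(a * y + b) + c, with A and c holding one value
    per channel (3 + 1 + 1 + 3 = 8 parameters in total)."""
    A = np.asarray(A, dtype=float).reshape(1, 3)
    c = np.asarray(c, dtype=float).reshape(1, 3)
    y = np.asarray(rows, dtype=float).reshape(-1, 1)
    return A * np.sin(a * y + b) + c   # shape: (num_rows, 3)
```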
• the image processing model can be used to obtain a bilateral grid that transfers the color and brightness of an image frame obtained with a normal or long exposure time to an image frame with a short exposure time; in a shooting scene with a 50 Hz alternating-current stroboscopic light source, since the normal or long exposure time is an integer multiple of 10 ms, there are usually no image stripes or color cast image areas in long exposure images and normal exposure images; the color conversion matrix and/or brightness parameters are obtained based on the bilateral grid output by the image processing model; when color migration and brightness migration are performed on the first image through the image processing model, the difference in image content between the first image and the second image can be identified; therefore, the color conversion matrix and/or brightness parameters obtained through the image processing model will not introduce ghost areas when color adjustment and/or brightness adjustment are performed on the first image, thereby improving image quality.
  • FIG. 9 is a schematic interface diagram of an electronic device provided by an embodiment of the present application.
• the preview image displayed in the preview interface of the electronic device includes banding stripes and/or color cast areas; after the electronic device detects that the user clicks the control, it can execute the image processing method provided by the embodiment of the present application, that is, perform the image banding removal processing.
• the electronic device collects the image; the image is an image with the banding stripes and/or color cast areas removed, that is, the output image after the image banding removal processing.
  • the image processing method provided by the embodiment of the present application is executed.
• the graphical user interface (GUI) shown in (a) in Figure 9 is the desktop 701 of the electronic device; the electronic device detects that the user clicks the control 702 of the camera application on the desktop 701, as shown in (b) in Figure 9; after the electronic device detects this operation, the electronic device runs the camera application; as shown in (a) in Figure 10, the electronic device can display a photo preview interface; the photo preview interface includes a preview image and a control 703, where the preview image includes alternating light and dark stripes and an image area 704; the image area 704 can be red, green, blue or other colors; the electronic device detects the user's operation of clicking the control 703, as shown in (b) in Figure 10; after the electronic device detects that the user clicks the control 703, the electronic device can execute the image processing method provided by the embodiment of the present application and display a preview interface as shown in (a) of Figure 11; the preview interface includes a photo taking control 705.
• after the electronic device detects the operation of clicking the control 709, as shown in (d) of Figure 13, it executes the image processing method provided by the embodiment of the present application.
• a preview interface as shown in (a) of Figure 13 can be displayed; the preview interface includes a preview image and a control 707, where the preview image includes alternating light and dark stripes and an image area 704; the image area 704 can be red, green, blue or other colors; the electronic device detects the user's operation of clicking the control 707, as shown in (b) of Figure 13; after the electronic device detects that the user clicks the control 707, the setting interface is displayed, as shown in (c) in Figure 13; the setting interface includes a control 709 for removing image banding; the electronic device detects the user's operation of clicking the control 709, as shown in (d) in Figure 13; after the electronic device detects that the user clicks the control 709, the image processing method provided by the embodiment of the present application is executed.
  • FIG 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 800 includes a processing module 810 and a display module 820 .
  • the light source of the shooting environment where the electronic device is located is a stroboscopic light source.
• the processing module 810 is used to run the camera application program of the electronic device; the display module 820 is used to display a first image, where the first image is an image of a photographed object collected based on a first exposure time, the photographed object is a moving object, the first exposure time is less than a first duration, and the first image includes stripes and/or a color cast image area; the processing module 810 is also used to detect a first operation, where the first operation instructs the electronic device to take a photo or record a video; in response to the first operation, obtain a second image, where the second image is an image of the photographed object collected based on a second exposure time, and the second exposure time is an integer multiple of the first duration; based on the first image and the second image, obtain a color conversion matrix and/or a brightness parameter, where the color conversion matrix is used to perform color adjustment on the first image, and the brightness parameter is used to perform brightness adjustment on the first image; perform first image processing on the first image based on the color conversion matrix and/or the brightness parameter to obtain a third image, where the third image is an image with the stripes and/or color cast image area removed.
  • processing module 810 is also used to:
  • the bilateral grid data includes the color conversion matrix and/or the brightness parameter, and the size of the first image is the same as the size of the second image.
  • processing module 810 is specifically used to:
  • the first image is interpolated based on the color conversion matrix and/or the brightness parameter to obtain the third image.
  • processing module 810 is specifically used to:
• in the first color space, perform the first image processing on the first image based on the color conversion matrix and/or the brightness parameter to obtain a processed image
  • second image processing is performed on the processed image to obtain the third image, and the second image processing is a color processing algorithm in the first color space.
• the processing module 810 is further configured to: detect the shooting scene in which the electronic device is located and detect the moving object; and detect the existence of the stripes and/or color cast image area.
  • the size of the first duration is obtained based on the number of times the strobe light source turns on and off per second.
• the first duration = 1000 / (the number of times the strobe light source turns on and off per second), in milliseconds; for example, a light source that turns on and off 100 times per second gives a first duration of 10 ms.
  • the number of times the stroboscopic light source turns on and off per second is related to the frequency of the operating voltage of the stroboscopic light source.
  • the image processing model is a convolutional neural network.
  • the image processing model is trained by the following method:
  • the sample data includes a first sample image, a second sample image and a third sample image
• the second sample image includes the image content of the first sample image, together with stripes and/or a color cast image area
  • the third sample image and the first sample image have the same image content
  • the image quality of the third sample image is higher than the image quality of the first sample image
  • the image processing model to be trained is trained based on the difference between the predicted image and the third sample image to obtain the image processing model.
  • the electronic device includes an image signal processor, and the first image is an image output by the image signal processor.
  • the second image is an image obtained by performing third image processing on the Raw image collected by the electronic device, and the third image processing includes color space conversion processing.
• the term "module" here can be implemented in the form of software and/or hardware, and is not specifically limited.
  • a “module” may be a software program, a hardware circuit, or a combination of both that implements the above functions.
• the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor), memory, merged logic circuitry, and/or other suitable components that support the described functionality.
  • the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this application.
  • Figure 15 shows a schematic structural diagram of an electronic device provided by this application.
  • the dotted line in Figure 15 indicates that this unit or module is optional; the electronic device 900 can be used to implement the image processing method described in the above method embodiment.
  • the electronic device 900 includes one or more processors 901, and the one or more processors 901 can support the electronic device 900 to implement the image processing method in the method embodiment.
  • Processor 901 may be a general-purpose processor or a special-purpose processor.
  • the processor 901 may be a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another programmable logic device, such as discrete gates, transistor logic devices, or discrete hardware components.
  • the processor 901 can be used to control the electronic device 900, execute software programs, and process data of the software programs.
  • the electronic device 900 may also include a communication unit 905 to implement input (reception) and output (transmission) of signals.
  • the electronic device 900 may be a chip, and the communication unit 905 may be an input and/or output circuit of the chip, or the communication unit 905 may be a communication interface of the chip; the chip may be used as a component of a terminal device or another electronic device.
  • the electronic device 900 may be a terminal device, and the communication unit 905 may be a transceiver of the terminal device. The electronic device 900 may include one or more memories 902 in which a program 904 is stored; the program 904 can be executed by the processor 901 to generate instructions 903, so that the processor 901 executes the image processing method described in the above method embodiment according to the instructions 903.
  • data may also be stored in the memory 902.
  • the processor 901 can also read data stored in the memory 902.
  • the data can be stored at the same storage address as the program 904, or the data can be stored at a different storage address than the program 904.
  • the processor 901 and the memory 902 can be provided separately or integrated together, for example, integrated on a system on chip (SOC) of the terminal device.
  • the memory 902 can be used to store the related programs 904 of the image processing method provided in the embodiment of the present application, and the processor 901 can be used to call the related programs 904 of the image processing method stored in the memory 902 when executing the image processing method.
  • the image processing method of the embodiment of the present application is, for example: run the camera application of the electronic device; display a first image, where the first image is an image of the photographed object collected based on a first exposure time, the photographed object is a moving object, the first exposure time is less than a first duration, and the first image includes stripes and/or color-cast image areas; detect a first operation, where the first operation instructs the electronic device to take a photo or record a video; in response to the first operation, acquire a second image, where the second image is an image of the photographed object collected based on a second exposure time, and the second exposure time is an integer multiple of the first duration; based on the first image and the second image, obtain a color conversion matrix and/or brightness parameters, where the color conversion matrix is used to adjust the color of the first image and the brightness parameters are used to adjust the brightness of the first image; perform first image processing on the first image based on the color conversion matrix and/or the brightness parameters to obtain a third image, where the third image is an image with the stripes and/or color-cast image areas removed; and display or save the third image.
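As a rough illustration (not the patent's actual implementation), the correction step described above — adjusting the first image with a color conversion matrix and a brightness parameter — can be sketched as follows; the single 3x3 matrix and scalar gain here are simplified stand-ins for the per-cell parameters a real bilateral grid would provide:

```python
import numpy as np

def apply_correction(first_img, color_matrix, brightness_gain):
    """Apply a color conversion matrix and a brightness parameter to an
    HxWx3 image with values in [0, 1]. One global 3x3 matrix and one scalar
    gain are used for simplicity; in the described method these parameters
    would vary per bilateral-grid cell."""
    h, w, _ = first_img.shape
    recolored = first_img.reshape(-1, 3) @ color_matrix.T  # color adjustment
    return np.clip(recolored.reshape(h, w, 3) * brightness_gain, 0.0, 1.0)

# With an identity matrix and unit gain, the image is unchanged.
img = np.full((4, 4, 3), 0.25)
out = apply_correction(img, np.eye(3), 1.0)
```

Doubling the brightness gain, for instance, scales every pixel by two before clipping, which is the kind of per-region adjustment that cancels the darker stripes of luma banding.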
  • this application also provides a computer program product, which when executed by the processor 901 implements the image processing method in any method embodiment of this application.
  • the computer program product may be stored in the memory 902, such as the program 904.
  • the program 904 is finally converted into an executable object file that can be executed by the processor 901 through processes such as preprocessing, compilation, assembly, and linking.
  • this application also provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a computer, the image processing method described in any method embodiment of this application is implemented.
  • the computer program may be a high-level language program or an executable object program.
  • the computer-readable storage medium is memory 902.
  • Memory 902 may be volatile memory or non-volatile memory, or memory 902 may include both volatile memory and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • volatile memory can be random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the embodiments of the electronic equipment described above are only illustrative. The division of the modules is only a logical function division; in actual implementation, there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the size of the sequence numbers of each process does not imply the order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

This application relates to the field of image processing and provides an image processing method and an electronic device. The method is applied to an electronic device whose shooting environment is lit by a flickering light source, and includes: running the camera application of the electronic device; displaying a first image, where the first exposure time is less than a first duration and the first image includes stripes and/or color-cast image areas; detecting a first operation; in response to the first operation, acquiring a second image, where the second exposure time is an integer multiple of the first duration; obtaining a color conversion matrix and/or brightness parameters based on the first image and the second image; performing first image processing on the first image based on the color conversion matrix and/or the brightness parameters to obtain a third image, where the third image is an image with the stripes and/or color-cast image areas removed; and displaying or saving the third image. Based on the solution of this application, stripes and/or local color casts in the image can be removed when photographing a moving object, improving image quality.

Description

Image processing method and electronic device
This application claims priority to the Chinese patent application No. 202211123861.X, entitled "Image processing method and electronic device", filed with the China National Intellectual Property Administration on September 15, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of images, and in particular to an image processing method and an electronic device.
Background
With the development of image technology in electronic devices, users have increasingly high requirements for shooting functions. For example, when photographing a moving object, a user usually expects the electronic device to capture an image of the object at the instant of motion; since the subject is moving, the electronic device usually needs to reduce motion blur in the image. At present, to reduce motion blur, the electronic device can shorten the exposure time and increase the sensitivity value. However, in a shooting scene with a powered light source (for example, 50 Hz AC power), shortening the exposure time can make the exposure time no longer an integer multiple of 10 ms; if the exposure time is not an integer multiple of 10 ms, image banding appears in the images collected by the electronic device — for example, alternating light and dark banding stripes, or local color casts in parts of the image — which degrades image quality.
Therefore, how to remove image banding and/or local color casts from images and improve image quality has become an urgent problem to be solved.
Summary
This application provides an image processing method and an electronic device, which can remove image banding and/or local color casts from images and improve image quality.
第一方面,提供了一种图像处理方法,应用于电子设备,所述电子设备所处的拍摄环境的光源为频闪光源,所述图像处理方法包括:
运行所述电子设备的相机应用程序;
显示第一图像,所述第一图像为基于第一曝光时间采集的拍摄对象的图像,所述拍摄对象为运动对象,所述第一曝光时间小于第一时长,所述第一图像中包括条纹和/或偏色图像区域;
检测到第一操作,所述第一操作指示所述电子设备拍摄或者录像;
响应于所述第一操作,获取第二图像,所述第二图像为基于第二曝光时间采集的所述拍摄对象的图像,所述第二曝光时间为所述第一时长的整数倍;
基于所述第一图像与所述第二图像,得到颜色转换矩阵和/或亮度参数,所述颜色转换矩阵用于对所述第一图像进行颜色调整,所述亮度参数用于对所述第一图像进行亮度调整;
基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行第一图像处理,得到第三图像,所述第三图像为去除所述条纹和/或偏色图像区域的图像;
显示或者保存所述第三图像。
在本申请的实施例中,在电子设备所处的拍摄环境的光源为频闪光源,由于电子设备需要拍摄运动物体从而降低曝光时间导致图像中出现banding条纹的情况下;在本申请的实施例中,通过以第二图像(例如,正常曝光图像)为基准对第一图像(曝光图像)进行颜色迁移与亮度迁移,得到颜色转换矩阵和/或亮度参数;基于颜色转换矩阵和/或亮度参数,去除第一图像中的条纹和/或偏色图像区域;确保在采集运动拍摄对象的运动瞬间图像时,去除图像中的条纹和/或偏色图像区域,提高图像质量。
结合第一方面,在第一方面的某些实现方式中,还包括:
将所述第一图像与所述第二图像输入至图像处理模型,得到双边网格数据;其中,所述图像处理模型用于以所述第二图像为基准对所述第一图像进行颜色迁移处理与亮度迁移处理,所述双边网格数据包括所述颜色转换矩阵和/或所述亮度参数,所述第一图像的尺寸与所述第二图像的尺寸相同。
本申请的实施例中,通过图像处理模型能够得到将正常曝光时间或者长曝光时间得到的图像帧的颜色与亮度迁移至短曝光时间的图像帧的双边网格;在交流电为50HZ的频闪光源的拍摄场景中,由于正常曝光时间或者长曝光时间为10ms的整数倍,因此,长曝光图像与正常曝光图像中通常不存在图像条纹或者偏色图像区域;基于图像处理模型输出的双边网格得到颜色转换矩阵和/或亮度参数;通过图像处理模型对第一图像进行颜色迁移与亮度迁移时,能够识别第一图像与第二图像之间的图像内容差异部分;因此,通过图像处理模型得到的得到颜色转换矩阵和/或亮度参数,在对第一图像进行颜色调整和/或亮度调整时不会引入鬼影区域,提高图像质量。
结合第一方面,在第一方面的某些实现方式中,所述基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行第一图像处理,得到第三图像,包括:
基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行插值处理,得到所述第三图像。
在一种可能的实现方式中,可以对颜色转换矩阵和/或亮度参数与第一图像的矩阵进行相乘,得到第三图像。
结合第一方面,在第一方面的某些实现方式中,所述基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行插值处理,得到所述第三图像,包括:
在第一颜色空间,基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行所述第一图像处理,得到处理后的图像;
在所述第一颜色空间,对所述处理后的图像进行第二图像处理,得到所述第三图像,所述第二图像处理为所述第一颜色空间中的颜色处理算法。
在本申请的实施例中,在图像信号处理器对电子设备采集的Raw图像进行处理的过程中,在执行第一颜色空间的算法时,可以先执行基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理再执行第一颜色空间的其他算法;由于第一图像处理能够得到去除banding条纹和/或偏色图像区域的图像;换而言之,第一图像处理能够对图像进行颜色与亮度的校正;在颜色与亮度的准确性较高的情况下,再执行第一颜色空间的其他算法能够确保在图像的颜色与亮度的准确性较高的基础上,对去除banding条纹和/或偏色图像区域的图像进行第一颜色空间的其他算法处理,从而提升图像质量。
结合第一方面,在第一方面的某些实现方式中,所述获取第二图像之前,还包括:对所述电子设备所处的拍摄场景进行检测,检测到所述运动对象;且检测到所述第一图像中存在所述条纹和/或偏色图像区域。
结合第一方面,在第一方面的某些实现方式中,所述第一时长的大小为基于所述频闪光源在每秒的亮暗次数得到的。
结合第一方面,在第一方面的某些实现方式中,所述第一时长=1000/所述频闪光源在每秒的亮暗次数。
结合第一方面,在第一方面的某些实现方式中,所述频闪光源在每秒的亮暗次数与所述频闪光源的工作电压的频率关联。
在一种可能的实现方式中,频闪光源工作电压的频率为50HZ,即频闪光源在每秒的亮暗次数为100次,则第一时长为10ms;此时,第一图像的曝光时间小于10ms,第二图像的曝光时间为10ms的整数倍。
在一种可能的实现方式中,频闪光源工作电压的频率为60HZ,即频闪光源在每秒的亮暗次数为120次,则第一时长为8.3ms;此时,第一图像的曝光时间小于8.3ms,第二图像的曝光时间为8.3ms的整数倍。
结合第一方面,在第一方面的某些实现方式中,所述图像处理模型为卷积神经网络。
在一种可能的实现方式中,图像处理模型为HDRnet。
结合第一方面,在第一方面的某些实现方式中,所述图像处理模型是通过以下方法训练得到的:
获取样本数据,所述样本数据包括第一样本图像、第二样本图像与第三样本图像,所述第二样本图像中包括所述第一样本图像的图像内容、条纹和/或偏色图像区域,所述第三样本图像与所述第一样本图像具有相同的图像内容,所述第三样本图像的图像质量高于所述第一样本图像的图像质量;
将所述第一样本图像与所述第二样本图像输入至待训练的图像处理模型,得到预测双边网格数据;
基于所述预测双边网格数据对所述第二样本图像进行插值处理,得到预测图像;
基于预测图像与所述第三样本图像之间的差异训练所述待训练的图像处理模型,得到所述图像处理模型。
结合第一方面,在第一方面的某些实现方式中,所述电子设备包括图像信号处理器,所述第一图像为所述图像信号处理器输出的图像。
在本申请的实施例中,第一图像可以为图像信号处理器输出的图像;由于第一图像为信号处理器输出的图像,图像信号处理器可以对短曝光的Raw图像进行去噪处理,因此虽然第一图像中包括banding条纹,但是第一图像中的图像细节信息较丰富。
结合第一方面,在第一方面的某些实现方式中,所述第二图像为对所述电子设备采集的Raw图像进行第三图像处理得到的图像,所述第三图像处理包括颜色空间转换处理。
应理解,在本申请的实施例中,第二图像为正常曝光图像;例如,第二图像可以为对正常曝光的Raw图像进行下采样与颜色空间转换处理后得到的图像;由于在本申 请的实施例中,需要第二图像中的颜色信息与亮度信息对第一图像进行迁移处理,因此对第二图像中的细节信息的要求较低,即第二图像可以无需经过图像信号处理器的处理。
第二方面,提供了一种电子设备,电子设备包括一个或多个处理器与存储器;存储器与一个或多个处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,一个或多个处理器调用计算机指令以使得电子设备执行:
运行所述电子设备的相机应用程序;
显示第一图像,所述第一图像为基于第一曝光时间采集的拍摄对象的图像,所述拍摄对象为运动对象,所述第一曝光时间小于第一时长,所述第一图像中包括条纹和/或偏色图像区域;
检测到第一操作,所述第一操作指示所述电子设备拍摄或者录像;
响应于所述第一操作,获取第二图像,所述第二图像为基于第二曝光时间采集的所述拍摄对象的图像,所述第二曝光时间为所述第一时长的整数倍;
基于所述第一图像与所述第二图像,得到颜色转换矩阵和/或亮度参数,所述颜色转换矩阵用于对所述第一图像进行颜色调整,所述亮度参数用于对所述第一图像进行亮度调整;
基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行第一图像处理,得到第三图像,所述第三图像为去除所述条纹和/或偏色图像区域的图像;
显示或者保存所述第三图像。
结合第二方面,在第二方面的某些实现方式中,一个或多个处理器调用计算机指令以使得电子设备执行:
将所述第一图像与所述第二图像输入至图像处理模型,得到双边网格数据;其中,所述图像处理模型用于以所述第二图像为基准对所述第一图像进行颜色迁移处理与亮度迁移处理,所述双边网格数据包括所述颜色转换矩阵和/或所述亮度参数,所述第一图像的尺寸与所述第二图像的尺寸相同。
结合第二方面,在第二方面的某些实现方式中,一个或多个处理器调用计算机指令以使得电子设备执行:
基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行插值处理,得到所述第三图像。
结合第二方面,在第二方面的某些实现方式中,一个或多个处理器调用计算机指令以使得电子设备执行:
在第一颜色空间,基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行所述第一图像处理,得到处理后的图像;
在所述第一颜色空间,对所述处理后的图像进行第二图像处理,得到所述第三图像,所述第二图像处理为所述第一颜色空间中的颜色处理算法。
结合第二方面,在第二方面的某些实现方式中,一个或多个处理器调用计算机指令以使得电子设备执行:对所述电子设备所处的拍摄场景进行检测,检测到所述运动对象;且检测到所述第一图像中存在所述条纹和/或偏色图像区域。
结合第二方面,在第二方面的某些实现方式中,所述第一时长的大小为基于所述 频闪光源在每秒的亮暗次数得到的。
结合第二方面,在第二方面的某些实现方式中,所述第一时长=1000/所述频闪光源在每秒的亮暗次数。
结合第二方面,在第二方面的某些实现方式中,所述频闪光源在每秒的亮暗次数与所述频闪光源的工作电压的频率关联。
结合第二方面,在第二方面的某些实现方式中,所述图像处理模型为卷积神经网络。
结合第二方面,在第二方面的某些实现方式中,所述图像处理模型是通过以下方法训练得到的:
获取样本数据,所述样本数据包括第一样本图像、第二样本图像与第三样本图像,所述第二样本图像中包括所述第一样本图像的图像内容、条纹和/或偏色图像区域,所述第三样本图像与所述第一样本图像具有相同的图像内容,所述第三样本图像的图像质量高于所述第一样本图像的图像质量;
将所述第一样本图像与所述第二样本图像输入至待训练的图像处理模型,得到预测双边网格数据;
基于所述预测双边网格数据对所述第二样本图像进行插值处理,得到预测图像;
基于预测图像与所述第三样本图像之间的差异训练所述待训练的图像处理模型,得到所述图像处理模型。
结合第二方面,在第二方面的某些实现方式中,所述电子设备包括图像信号处理器,所述第一图像为所述图像信号处理器输出的图像。
结合第二方面,在第二方面的某些实现方式中,所述第二图像为对所述电子设备采集的Raw图像进行第三图像处理得到的图像,所述第三图像处理包括颜色空间转换处理。
第三方面,提供了一种电子设备,包括用于执行第一方面或者第一方面中的任意一种实现方式中的图像处理方法的模块/单元。
第四方面,提供一种电子设备,所述电子设备包括一个或多个处理器和存储器与;所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行第一方面或者第一方面中的任意一种实现方式中的图像处理方法。
第五方面,提供了一种芯片系统,所述芯片系统应用于电子设备,所述芯片系统包括一个或多个处理器,所述处理器用于调用计算机指令以使得所述电子设备执行第一方面或第一方面中的任一种图像处理方法。
第六方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序代码,当所述计算机程序代码被电子设备运行时,使得该电子设备执行第一方面或者第一方面中的任意一种实现方式中的图像处理方法。
第七方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码被电子设备运行时,使得该电子设备执行第一方面或者第一方面中的任意一种实现方式中的图像处理方法。
在电子设备所处的拍摄环境的光源为频闪光源,由于电子设备需要拍摄运动物体 从而降低曝光时间导致图像中出现banding(例如,banding条纹和/或偏色图像区域)的情况下;在本申请的实施例中,通过以正常曝光图像为基准对短曝光图像进行颜色迁移与亮度迁移,得到颜色转换矩阵和/或亮度参数;基于颜色转换矩阵和/或亮度参数,去除短曝光图像中的banding条纹;确保在采集运动拍摄对象的运动瞬间图像时,去除图像中的banding条纹,提高图像质量。
此外,在本申请的实施例中,基于图像处理模型输出的双边网格得到颜色转换矩阵和/或亮度参数;通过图像处理模型对第一图像进行颜色迁移与亮度迁移时,第一图像与第二图像之间图像内容差异较大的偏差区域,即图像处理模型能够识别第二图像与第一图像之间的鬼影区域;因此,通过图像处理模型得到的得到颜色转换矩阵和/或亮度参数,在对第一图像进行颜色调整和/或亮度调整时不会引入鬼影区域,提高图像质量。
Brief Description of the Drawings
Figure 1 is a schematic diagram of a hardware system of an electronic device applicable to this application;
Figure 2 is a schematic diagram of a software system of an electronic device applicable to this application;
Figure 3 is a schematic diagram of an application scenario applicable to an embodiment of this application;
Figure 4 is a schematic diagram of a graphical user interface applicable to an embodiment of this application;
Figure 5 is a schematic flowchart of an image processing method provided by an embodiment of this application;
Figure 6 is a schematic flowchart of an image signal processor processing method provided by an embodiment of this application;
Figure 7 is a schematic flowchart of an image processing method provided by an embodiment of this application;
Figure 8 is a schematic flowchart of a training method for an image processing model provided by an embodiment of this application;
Figure 9 is a schematic diagram of a graphical user interface applicable to an embodiment of this application;
Figure 10 is a schematic diagram of a graphical user interface applicable to an embodiment of this application;
Figure 11 is a schematic diagram of a graphical user interface applicable to an embodiment of this application;
Figure 12 is a schematic diagram of a graphical user interface applicable to an embodiment of this application;
Figure 13 is a schematic diagram of a graphical user interface applicable to an embodiment of this application;
Figure 14 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
Figure 15 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed Description
In the embodiments of this application, the terms "first", "second", and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of these embodiments, unless otherwise stated, "multiple" means two or more.
To facilitate understanding of the embodiments of this application, the relevant concepts involved are briefly explained first.
1. Flicker
The energy transmitted in an AC power grid is not constant; it varies at a fixed frequency, generally called the power-line (mains) frequency. The energy variation brought about by the power-line frequency is called flicker.
2. Image banding
In a shooting environment lit by a flickering light source, the phenomenon in which the image sensor of an electronic device captures the flicker and forms bands on the image is called image banding, usually referred to simply as banding.
For example, for 50 Hz AC power, i.e., a powered light source that brightens and darkens 100 times per second: if the exposure time of the electronic device is an integer multiple of 10 ms, the exposure integration period can cancel out the banding; if the exposure time is not an integer multiple of 10 ms, the amount of incoming light during image capture fluctuates with the AC sine wave, producing regular stripes in the image.
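The relationship above between mains frequency, flicker period, and exposure time can be checked with a small helper; the 50 Hz and 60 Hz figures follow the examples given in this document:

```python
def flicker_period_ms(mains_hz):
    """The light brightens and darkens twice per AC cycle, so the flicker
    period (the 'first duration') is 1000 / (2 * mains_hz) milliseconds."""
    return 1000.0 / (2 * mains_hz)

def banding_expected(exposure_ms, mains_hz, tol=1e-6):
    """Banding is expected when the exposure time is not an integer
    multiple of the flicker period."""
    period = flicker_period_ms(mains_hz)
    ratio = exposure_ms / period
    return abs(ratio - round(ratio)) > tol

# 50 Hz mains: period 10 ms; a 20 ms exposure cancels banding, a 7 ms one does not.
```

For 60 Hz mains the period is about 8.33 ms, matching the 8.3 ms figure used later in this document.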
3. Bilateral grid
A bilateral grid is essentially a data structure. For example, for single-channel grayscale values, a bilateral grid can be a three-dimensional array obtained by combining the two-dimensional spatial-domain information of the image with its one-dimensional grayscale information.
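A minimal sketch of such a structure (grid sizes chosen arbitrarily here): a single-channel image is accumulated into a 3D array indexed by two spatial coordinates plus a grayscale bin:

```python
import numpy as np

def build_bilateral_grid(gray, grid_h=16, grid_w=16, n_bins=8):
    """Average a grayscale image (values in [0, 1)) into a
    (grid_h, grid_w, n_bins) bilateral grid: two spatial axes plus one
    intensity axis, as described above."""
    h, w = gray.shape
    total = np.zeros((grid_h, grid_w, n_bins))
    count = np.zeros((grid_h, grid_w, n_bins))
    for i in range(h):
        for j in range(w):
            gy = min(i * grid_h // h, grid_h - 1)   # spatial cell (row)
            gx = min(j * grid_w // w, grid_w - 1)   # spatial cell (column)
            gb = min(int(gray[i, j] * n_bins), n_bins - 1)  # intensity bin
            total[gy, gx, gb] += gray[i, j]
            count[gy, gx, gb] += 1
    return total / np.maximum(count, 1)

grid = build_bilateral_grid(np.full((32, 32), 0.5))
```

A uniform 0.5 image lands entirely in one intensity bin of every spatial cell, which makes the structure easy to verify.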
4. Exposure time
Exposure time refers to the time during which light strikes the film or photosensitive sensor, from the moment the camera shutter opens until it closes.
5. Neural network
A neural network is a network formed by connecting multiple single neural units together; that is, the output of one neural unit can be the input of another. The input of each neural unit can be connected to the local receptive field of the previous layer to extract features of that local receptive field, and the local receptive field can be a region composed of several neural units.
6. Convolutional neural network (CNN)
A convolutional neural network is a deep neural network with a convolutional structure. It contains a feature extractor composed of convolutional layers and subsampling layers, which can be regarded as a filter. A convolutional layer is a layer of neurons in the network that convolves the input signal; in a convolutional layer, a neuron may be connected to only some of the neurons in adjacent layers. A convolutional layer usually contains several feature planes, and each feature plane can be composed of neural units arranged in a rectangle. Neural units in the same feature plane share weights, and the shared weights are the convolution kernel. Weight sharing can be understood as meaning that the way image information is extracted is independent of position. A convolution kernel can be initialized as a matrix of random size, and reasonable weights can be learned during training. A direct benefit of weight sharing is reducing the connections between the layers of the network while also lowering the risk of overfitting.
7. HDRnet
HDRNet is a typical 3D interpolation grid; for example, the spatial domain can be divided into 16x16 cells and the value range into 8 intervals.
8. Back-propagation algorithm
A neural network can use the error back-propagation (BP) algorithm to correct the parameters of the initial neural network model during training, so that the reconstruction error loss of the model becomes smaller and smaller. Specifically, forward-propagating the input signal to the output produces an error loss, and the parameters of the initial model are updated by back-propagating the error loss information, so that the error loss converges. The back-propagation algorithm is a back-propagation process dominated by the error loss, aiming to obtain the optimal parameters of the neural network model, such as the weight matrices.
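Back-propagation driven by an error loss can be illustrated on the simplest possible model — a linear layer trained with mean-squared error. This is a toy example, not the HDRnet training described later:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                       # targets from a known linear model

w = np.zeros(3)
for _ in range(300):
    err = X @ w - y                  # forward pass: prediction error
    grad = 2.0 * X.T @ err / len(X)  # back-propagated gradient of the MSE loss
    w -= 0.1 * grad                  # update parameters to shrink the error loss
```

After a few hundred updates the learned weights recover the true ones, i.e., the error loss has converged.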
The image processing method and electronic device provided in the embodiments of this application are described below with reference to the accompanying drawings.
图1示出了一种适用于本申请的电子设备的硬件系统。
电子设备100可以是手机、智慧屏、平板电脑、可穿戴电子设备、车载电子设备、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、笔记本电脑、 超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、投影仪等等,本申请实施例对电子设备100的具体类型不作任何限制。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
需要说明的是,图1所示的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图1所示的部件更多或更少的部件,或者,电子设备100可以包括图1所示的部件中某些部件的组合,或者,电子设备100可以包括图1所示的部件中某些部件的子部件。图1示的部件可以以硬件、软件、或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元。例如,处理器110可以包括以下处理单元中的至少一个:应用处理器(application processor,AP)、调制解调处理器、图形处理器(graphics processing unit,GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器、神经网络处理器(neural-network processing unit,NPU)。其中,不同的处理单元可以是独立的器件,也可以是集成的器件。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。例如,处理器110可以包括以下接口中的至少一个:内部集成电路(inter-integrated circuit,I2C)接口、内部集成电路音频(inter-integrated circuit sound,I2S)接口、脉冲编码调制(pulse code modulation,PCM)接口、通用异步接收传输器(universal asynchronous receiver/transmitter,UART)接口、移动产业处理器接口(mobile industry processor interface,MIPI)、通用输入输出(general-purpose input/output,GPIO)接口、SIM接口、USB接口。
示例性地,在本申请的实施例中,处理器110可以用于执行本申请实施例提供的图像处理方法;例如,行电子设备的相机应用程序;显示第一图像,第一图像为基于第一曝光时间采集的拍摄对象的图像,拍摄对象为运动对象,第一曝光时间小于第一时长,第一图像中包括条纹和/或偏色图像区域;检测到第一操作,第一操作指示电子设备拍摄或者录像;响应于第一操作,获取第二图像,第二图像为基于第二曝光时间采集的拍摄对象的图像, 第二曝光时间为第一时长的整数倍;基于第一图像与第二图像,得到颜色转换矩阵和/或亮度参数,颜色转换矩阵用于对第一图像进行颜色调整,亮度参数用于对第一图像进行亮度调整;基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理,得到第三图像,第三图像为去除条纹和/或偏色图像区域的图像;显示或者保存第三图像。
图1所示的各模块间的连接关系只是示意性说明,并不构成对电子设备100的各模块间的连接关系的限定。可选地,电子设备100的各模块也可以采用上述实施例中多种连接方式的组合。
电子设备100的无线通信功能可以通过天线1、天线2、移动通信模块150、无线通信模块160、调制解调处理器以及基带处理器等器件实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
电子设备100可以通过GPU、显示屏194以及应用处理器实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194可以用于显示图像或视频。
可选地,显示屏194可以用于显示图像或视频。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)、有源矩阵有机发光二极体(active-matrix organic light-emitting diode,AMOLED)、柔性发光二极管(flex light-emitting diode,FLED)、迷你发光二极管(mini light-emitting diode,Mini LED)、微型发光二极管(micro light-emitting diode,Micro LED)、微型OLED(Micro OLED)或量子点发光二极管(quantum dot light emitting diodes,QLED)。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
示例性地,电子设备100可以通过ISP、摄像头193、视频编解码器、GPU、显示屏194以及应用处理器等实现拍摄功能。
示例性地,ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过摄像头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP可以对图像的噪点、亮度和色彩进行算法优化,ISP还可以优化拍摄场景的曝光和色温等参数。在一些实施例中,ISP可以设置在摄像头193中。
示例性地,摄像头193(也可以称为镜头)用于捕获静态图像或视频。可以通过应用程序指令触发开启,实现拍照功能,如拍摄获取任意场景的图像。摄像头可以包括成像镜头、滤光片、图像传感器等部件。物体发出或反射的光线进入成像镜头,通过滤光片,最终汇聚在图像传感器上。成像镜头主要是用于对拍照视角中的所有物体(也可以称为待拍摄场景、目标场景,也可以理解为用户期待拍摄的场景图像)发出或反射的光汇聚成像;滤光片主要是用于将光线中的多余光波(例如除可见光外的光波,如红外)滤去;图像传感器可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。图像传感器主要是用于对接收到的光信号进行光电转换,转换成电信号,之后将电信号传递给ISP转换成数字图像 信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。
示例性地,数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
示例性地,视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1、MPEG2、MPEG3和MPEG4。
示例性地,陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x轴、y轴和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。例如,当快门被按下时,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航和体感游戏等场景。
示例性地,加速度传感器180E可检测电子设备100在各个方向上(一般为x轴、y轴和z轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。加速度传感器180E还可以用于识别电子设备100的姿态,作为横竖屏切换和计步器等应用程序的输入参数。
示例性地,距离传感器180F用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,例如在拍摄场景中,电子设备100可以利用距离传感器180F测距以实现快速对焦。
示例性地,环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
示例性地,指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现解锁、访问应用锁、拍照和接听来电等功能。
示例性地,触摸传感器180K,也称为触控器件。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,触摸屏也称为触控屏。触摸传感器180K用于检测作用于其上或其附近的触摸操作。触摸传感器180K可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,并且与显示屏194设置于不同的位置。
上文详细描述了电子设备100的硬件系统,下面介绍电子设备100的软件系统。
图2是本申请实施例提供的电子设备的软件系统的示意图。
如图2所示,系统架构中可以包括应用层210、应用框架层220、硬件抽象层230、驱动层240以及硬件层250。
应用层210可以包括相机应用程序。
可选地,应用层210还可以包括图库、日历、通话、地图、导航、WLAN、蓝牙、音 乐、视频、短信息等应用程序。
应用框架层220为应用程序层的应用程序提供应用程序编程接口(application programming interface,API)和编程框架;应用程序框架层可以包括一些预定义的函数。
例如,应用框架层220可以包括相机访问接口;相机访问接口中可以包括相机管理与相机设备。其中,相机管理可以用于提供管理相机的访问接口;相机设备可以用于提供访问相机的接口。
硬件抽象层230用于将硬件抽象化。比如,硬件抽象层可以包相机抽象层以及其他硬件设备抽象层;相机抽象层中可以包括相机设备1、相机设备2等;相机硬件抽象层可以与相机算法库相连接,相机硬件抽象层可以调用相机算法库中的算法。
示例性地,相机算法库中可以包括图像处理算法,运行图像处理算法时用于执行本申请实施例提供的图像处理方法。
驱动层240用于为不同硬件设备提供驱动。比如,驱动层可以包括相机设备驱动。
硬件层250可以包括图像传感器、图像信号处理器以及其他硬件设备。
目前,用户在拍摄运动对象时,通常期望电子设备能够采集到运动对象在运动瞬间的图像;由于拍摄对象在运动,因此电子设备通常需要减少图像中的运动模糊;目前,为了减少图像中的运动模糊,通常电子设备可以缩短曝光时间,提升感光度值;但是,在频闪光源(例如,50HZ交流电)的拍摄场景中,由于缩短曝光时间会导致曝光时间不满足10ms的整数倍;若曝光时间不满足10ms的整数倍,则电子设备采集的图像中会出现图像条带(banding)问题,即图像中出现明暗相间的banding条纹;其中,banding可以包括Luma banding与Chroma banding;Luma banding是指由于缩短曝光时间产生的banding条纹;其中,Luma banding仅有明暗变换,无颜色改变;Chroma banding是指图像中的局部区域出现图像偏色,图像中的局部区域与图像整体颜色不符,出现红色,绿色或者蓝色的偏色;产生Luma banding的原因在于:对于50HZ交流电的频闪光源的拍摄场景,若电子设备的曝光时间不是10ms的整数倍时,拍摄环境中频闪光源产生的交流电正弦波段无法与曝光积分周期相抵消,使得采集的图像中出现亮度有规律的条纹;产生Chroma banding的原因在于:在频闪光源的电压变化10%时,光源的色温会发生较大的变化(例如,1000K左右);图像的成像与色温相关;由于带电光源的电压发生较小的变化时,色温会出现较大的变化;导致图像中出现偏色问题;因此,若图像中出现banding则会严重影响图像质量。
有鉴于此,本申请的实施例提供了一种图像处理方法和电子设备;在电子设备所处的拍摄环境的光源为频闪光源,由于电子设备需要拍摄运动物体从而降低曝光时间导致图像中出现banding条纹和/或偏色图像区域的情况下;在本申请的实施例中,通过以正常曝光图像为基准对短曝光图像进行颜色迁移与亮度迁移,从而去除短曝光图像中的banding条纹与偏色图像区域;确保在采集运动拍摄对象的运动瞬间图像时,去除图像中的banding条纹与偏色图像区域,提高图像质量。
图3是本申请实施例提供的图像处理方法的应用场景的示意图。
示例性地,本申请实施例中的图像处理方法可以应用于拍照领域;例如,本申请的图像处理方法可以应用于在频闪光源的拍摄环境(例如,室内拍摄环境)中拍摄运动对象;其中,运动对象可以是指运动的用户,移动的物体,或者,视频中播放的图像(例 如,电影)等。
例如,如图3所示,拍摄对象270在频闪光源的室内场所运动,室内照明设备(例如,电灯260)提供照明,电子设备100运行相机应用,电子设备100采集包括拍摄对象270的图像的过程中,显示屏上可以实时显示包括拍摄对象270的预览图像;用户在查看电子设备100显示屏的预览图像时,若要抓拍拍摄对象270运动瞬间的图像,点击拍摄界面的拍摄控件即可。当电子设备100的拍摄控件被触发,电子设备100可以抓拍到拍摄对象270运动瞬间的图像。
可选地,图3中以拍摄对象270为用户进行举例说明;拍摄对象还可以是动物、车辆、机器人等。例如,拍摄对象在频闪光源的拍摄环境中运动,可以是用户在室内打羽毛球、打乒乓球、练习瑜伽等。其中,电子设备的拍摄视角中的拍摄对象可以是一个用户,也可以是多用户。
另外,室内场所需要使用照明设备提供光照,照明设备可以在交流电的驱动下工作。以照明设备是日光灯为例,日光灯的工作原理是在高压电流的作用下,灯管内的惰性气体放电,放电产生的电子到达日光灯的灯管使得日光灯发光。以日光灯的工作电压是50Hz交流电为例,当日光灯在交流电的驱动下工作时,50Hz的交流电使得日光灯在1s中频闪次数为100。
可选地,日光灯的工作电压是60Hz交流电为例,当日光灯在交流电的驱动下工作时,60Hz的交流电使得日光灯在1s中频闪次数为120。
示例性地,在本申请的实施例中,电子设备处于拍照模式,电子设备的显示屏可以显示预览图像,当电子设备的拍摄控件被触发,电子设备可以生成抓拍图像(也可被称为拍摄图像)。当电子设备抓拍拍摄对象的运动瞬间的过程,拍摄场景中光源的频闪会影响图像传感器采集的图像,可能会使得预览图像流中的图像出现banding条纹,和/或,使得电子设备采集的图像出现banding条纹;例如,如图4所示。
本申请实施例的图像处理方法可以应用于相机应用程序的拍照模式;通过本申请实施例提供的图像处理方法,在电子设备处于频闪光源的拍摄环境中,拍摄运动物体时,能够去除图像中的banding条纹与偏色图像区域,提高图像的颜色准确性与亮度准确性;提高图像质量。
示例性地,在相机应用程序处于预览状态(例如,拍照预览)时;电子设备显示的预览图像中包括banding条纹;在电子设备的拍摄控件被触发后,可以执行本申请实施例提供的图像处理方法,得到去除banding的图像;去除图像banding的图像可以是指去除图像中的局部偏色问题(例如,Chroma banding),和/或,去除图像中的明暗相间的条纹(例如,Luma banding)。
可选地,在电子设备具有足够的运算能力的情况下,本申请实施例中的图像处理方法还可以应用于录制视频领域、视频通话领域或者其他图像处理领域。
示例性地,视频通话场景可以包括但不限于以下场景中:
视频通话、视频会议应用、长短视频应用、视频直播类应用、视频网课应用、人像智能运镜应用场景、系统相机录像功能录制视频、视频监控,或者智能猫眼等人像拍摄类场景等。
应理解,上述为对应用场景的举例说明,并不对本申请的应用场景作任何限定。
下面结合图6至图12对本申请实施例提供的图像处理方法进行详细地描述。
图5是本申请实施例提供的一种图像处理方法的示意性流程图。该方法300包括可以由图1所示的电子设备执行;该方法300包括步骤S310至步骤S350,下面分别对步骤S310至步骤S350进行详细的描述。
应理解,图5所示的方法应用于电子设备所处拍摄环境的光源为频闪光源;由于本申请的方案光源为频闪光源,因此在电子设备获取图像时的曝光时间不满足(1000/每秒钟光源亮暗次数)毫秒的整数倍时,图像中会出现banding条纹。
步骤S310、运行电子设备中的相机应用程序。
例性地,用户可以通过单击“相机”应用程序的图标,指示电子设备运行相机应用;或者,电子设备处于锁屏状态时,用户可以通过在电子设备的显示屏上向右滑动的手势,指示电子设备运行相机应用。又或者,电子设备处于锁屏状态,锁屏界面上包括相机应用程序的图标,用户通过点击相机应用程序的图标,指示电子设备运行相机应用程序。又或者,电子设备在运行其他应用时,该应用具有调用相机应用程序的权限;用户通过点击相应的控件可以指示电子设备运行相机应用程序。例如,电子设备正在运行即时通信类应用程序时,用户可以通过选择相机功能的控件,指示电子设备运行相机应用程序等。
应理解,上述为对运行相机应用程序的操作的举例说明;还可以通过语音指示操作,或者其它操作的指示电子设备运行相机应用程序;本申请对此不作任何限定。
还应理解,运行相机应用程序可以是指启动相机应用程序。
步骤S320、显示第一图像,第一图像中包括条纹和/或偏色图像区域。
其中,第一图像为基于第一曝光时间采集的拍摄对象的图像,拍摄对象为运动对象,第一曝光时间小于第一时长。
可选地,第一图像中可以包括条纹,即第一图像中包括Luma banding。
可选地,第一图像中可以包括条纹与偏色图像区域,即第一图像中包括Luma banding与Chroma banding。
应理解,Luma banding指由于缩短曝光时间产生的banding条纹;其中,Luma banding仅有明暗变换,无颜色改变;产生Luma banding的原因在于:拍摄环境中频闪光源产生的交流电正弦波段无法与曝光积分周期相抵消,使得采集的图像中出现亮度有规律的条纹。
应理解,Chroma banding是指图像中的局部区域出现图像偏色,图像中的局部区域与图像整体颜色不符,出现红色,绿色或者蓝色等偏色;产生Chroma banding的原因在于:能量变化的同时,伴随色温的变化,导致图像中出现偏色问题;例如,对于Luma banding中明暗相间的条纹中呈现不同的颜色,则图像中出现Chroma banding。
还应理解,条纹可以是指全局banding,即图像的整体区域中会出现明暗相间的条纹;偏色图像区域可以是指局部banding,即图像中出现局部偏色区域,该偏色区域与图像整体的颜色不相符;例如,可以是出现红色,绿色或者蓝色的偏色。
还应理解,由于第一图像的曝光时间小于第一时长,因此曝光积分周期无法抵消banding;因此,第一图像中可能会出现banding条纹;此外,在频闪光源的电压变化10%时,光源的色温会发生较大的变化(例如,1000K左右);图像的成像与色温相关;由于带电光源的电压发生较小的变化时,色温会出现较大的变化;导致图像中出现偏色问题。
示例性地,第一图像可以为如图10中的(a)所示的图像,图像中包括明暗相间的条纹与偏色图像区域704;图像区域704可以是指红色,绿色,蓝色或者其他颜色的偏色图像区域。
可选地,第一图像可以是指电子设备中预览界面中的预览图像;其中,预览界面可以是指拍照预览界面,或者,录像预览界面。
步骤S330、检测到第一操作,第一操作指示所述电子设备拍摄或者录像。
示例性地,在相机应用程序处于拍照模式,则第一操作可以是指点击拍照控件的操作,如图11中的(b)所示。
示例性地,在相机应用程序处于录像模式,则第一操作可以是指点击录像控件的操作。应理解,上述是对第一操作的举例说明;还可以通过语音指示操作,或者其它操作的指示电子设备拍照或者录像;本申请对此不作任何限定。
可选地,运动对象可以是指运动的用户,移动的物体,或者,视频中播放的图像(例如,电影)等。
可选地,电子设备中包括图像信号处理器,第一图像可以为图像信号处理器输出的图像。
在本申请的实施例中,第一图像可以为图像信号处理器输出的图像;由于第一图像为信号处理器输出的图像,图像信号处理器可以对短曝光的Raw图像进行去噪处理,因此虽然第一图像中包括banding条纹,但是第一图像中的图像细节信息较丰富。
步骤S340、响应于第一操作,获取第二图像。
其中,第二图像为基于第二曝光时间采集的拍摄对象的图像,第二曝光时间为第一时长的整数倍。
应理解,由于第二图像的曝光时间为第一时长的整数倍,因此曝光积分周期可以抵消banding;因此,第二图像中不会出现banding条纹。
示例性地,对于50HZ交流电的频闪光源,第二图像的曝光时间可以为10ms的整数倍。
应理解,由于第二图像的曝光时间满足第一时长的整数倍,因此第二图像中每一行像素接收的能量相同,因此第二图像中不会存在能量波动,也不会存在偏色问题;因此,第二图像中不会出现条纹与偏色图像区域,即第二图像中不存在Luma banding与Chroma banding。
可选地,第一时长的大小为基于频闪光源在每秒的亮暗次数得到的。
可选地,频闪光源在每秒的亮暗次数与频闪光源的工作电压的频率关联。
示例性地,第一时长=1000/频闪光源在每秒的亮暗次数。
在一个示例中,频闪光源工作电压的频率为50HZ,即频闪光源在每秒的亮暗次数为100次,则第一时长为10ms;此时,第一图像的曝光时间小于10ms,第二图像的曝光时间为10ms的整数倍。
在一个示例中,频闪光源工作电压的频率为60HZ,即频闪光源在每秒的亮暗次数为120次,则第一时长为8.3ms;此时,第一图像的曝光时间小于8.3ms,第二图像的曝光时间为8.3ms的整数倍。
应理解,上述是对第一时长的举例说明;本申请对第一时长的大小不作任何限定。
可选地,第二图像为对电子设备采集的Raw图像进行第三图像处理得到的图像,第三图像处理包括颜色空间转换处理。
示例性地,电子设备可以采集正常曝光的Raw图像,对正常曝光的Raw图像进行颜色空间转换处理,得到第二像。
例如,第二图像可以为RGB颜色空间的图像,或者,YUV颜色空间的图像。
应理解,在本申请的实施例中,第二图像为正常曝光图像;例如,第二图像可以为对正常曝光的Raw图像进行下采样与颜色空间转换处理后得到的图像;由于在本申请的实施例中,需要第二图像中的颜色信息与亮度信息对第一图像进行迁移处理,因此对第二图像中的细节信息的要求较低,即第二图像可以无需经过图像信号处理器的处理。
可选地,在获取第二图像之前,还包括:对电子设备所处的拍摄场景进行检测,检测到运动对象;且检测到第一图像中存在条纹和/或偏色图像区域。
示例性地,电子设备中可以包括检测模块,检测模块可以对拍摄对象进行检测;当拍摄对象中包括运动对象时,检测模块输出标识,该标识可以指示拍摄场景中包括运动对象。
示例性地,电子设备中的防闪烁传感器(Flicker Sensor)可以用于检测是否存在条纹;其中,防闪烁传感器(Flicker Sensor)可以为一种对环境光进行采样的传感器。
在本申请的实施例中,在拍摄场景中存在运动对象且采集的图像中包括banding时,可以触发电子设备执行本申请实施例提供的图像处理方法,即去除图像中banding的方法。
应理解,若拍摄场景中包括运动对象,则在电子设备采集运动对象在运动瞬间的图像;由于拍摄对象在运动,因此电子设备通常需要减少图像中的运动模糊;为了减少图像中的运动模糊,通常电子设备可以缩短曝光时间,提升感光度值;但是,对于频闪光源的拍摄环境而言,降低曝光时间会导致图像中出现banding,因此需要对图像中的banding进行处理。
步骤S350、基于第一图像与第二图像,得到颜色转换矩阵和/或亮度参数。
其中,颜色转换矩阵用于对第一图像进行颜色调整,亮度参数用于对第一图像进行亮度调整。
在本申请的实施例中,由于第二图像为正常曝光时间的图像,因此第二图像中不存在banding;可以基于将第二图像的颜色与亮度迁移至第一图像中,得到颜色转换矩阵与亮度参数。
可选地,上述方法还包括:
将第一图像与第二图像输入至图像处理模型,得到双边网格数据;其中,图像处理模型用于以第二图像为基准对第一图像进行颜色迁移处理与亮度迁移处理,双边网格数据包括颜色转换矩阵和/或亮度参数,第一图像的尺寸与第二图像的尺寸相同。
示例性地,图像处理模型为卷积神经网络;例如,图像处理模型可以为HDRnet。可选地,图像处理模型的训练方法可以参见后续图8所示的相关描述。
在本申请的实施例中,基于图像处理模型输出的双边网格得到颜色转换矩阵和/或亮度参数;通过图像处理模型对第一图像进行颜色迁移与亮度迁移时,第一图像与第二图像之间图像内容差异较大的偏差区域,即图像处理模型能够识别第二图像与第 一图像之间的鬼影区域;因此,通过图像处理模型得到的得到颜色转换矩阵和/或亮度参数,在对第一图像进行颜色调整和/或亮度调整时不会引入鬼影区域,提高图像质量。
可选地,可以对第一图像与第二图像先进行配准处理与平滑处理;对配准处理后的第一图像与第二图像进行逐像素的作差,得到颜色转换矩阵和/或亮度参数。
步骤S360、基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理,得到第三图像。
其中,第三图像为去除条纹和/或偏色图像区域的图像。
示例性地,可以基于颜色转换矩阵对第一图像进行颜色调整,去除第一图像中的偏色图像区域;基于亮度参数对第一图像进行亮度调整,去除第一图像中的banding条纹;例如,如图11中的(c)所示,基于颜色转换矩阵可以去除图像中的图像区域704;基于亮度参数可以去除图像中的明暗相间的条纹。
示例性地,第一图像可以如图11中的(d)所示,第三图像可以如图12所示。
可选地,基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理,得到第三图像,包括:
基于颜色转换矩阵和/或亮度参数对第一图像进行插值处理,得到第三图像。
示例性地,可以对颜色转换矩阵和/或亮度参数与第一图像的矩阵进行相乘,得到第三图像。
可选地,基于颜色转换矩阵和/或亮度参数对第一图像进行插值处理,得到第三图像,包括:
在第一颜色空间,基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理,得到处理后的图像;
在第一颜色空间,对处理后的图像进行第二图像处理,得到第三图像,第二图像处理为第一颜色空间中的颜色处理算法。
示例性地,第一颜色空间可以为RGB颜色空间;即第一图像处理可以在RGB颜色空间执行。
例如,在RGB颜色空间中,可以先执行第一图像处理;再执行其他RGB颜色空间中与颜色处理相关的算法。
在本申请的实施例中,在图像信号处理器对电子设备采集的Raw图像进行处理的过程中,在执行RGB域算法时,可以先执行基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理再执行其他RGB域算法;由于第一图像处理能够得到去除banding条纹和/或偏色图像区域的图像;换而言之,第一图像处理能够对图像进行颜色与亮度的校正;在颜色与亮度的准确性较高的情况下,再执行其他RGB域算法能够确保在图像的颜色与亮度的准确性较高的基础上,对去除banding的图像进行RGB域算法处理,从而提升图像质量。
例如,在RGB颜色空间中,可以先执行第一图像处理;再执行其他RGB颜色空间中的其他算法。
示例性地,第一颜色空间可以为YUV颜色空间;即第一图像处理可以在YUV颜色空间执行。
可选地,图像处理模型是通过以下方法训练得到的:
获取样本数据,样本数据包括第一样本图像、第二样本图像与第三样本图像,第二样本图像中包括第一样本图像的图像内容、条纹和/或偏色图像区域,第三样本图像与第一样本图像具有相同的图像内容,第三样本图像的图像质量高于第一样本图像的图像质量;
将第一样本图像与第二样本图像输入至待训练的图像处理模型,得到预测双边网格数据;
基于预测双边网格数据对第二样本图像进行插值处理,得到预测图像;
基于预测图像与第三样本图像之间的差异训练待训练的图像处理模型,得到图像处理模型。
步骤S370、显示或者保存第三图像。
示例性地,电子设备在运行其他应用时,该应用具有调用相机应用程序的权限;在其他应用调用相机应用程序获取图像时,可以显示第三图像。示例性地,在电子设备运行相机应用程序的拍摄模式(或者录像模式)时,在电子设备检测到指示拍照的操作(或者指示录像的操作)之后,可以保存第三图像。
在本申请的实施例中,在电子设备所处的拍摄环境的光源为频闪光源,由于电子设备需要拍摄运动物体从而降低曝光时间导致图像中出现banding条纹和/或偏色图像区域的情况下;在本申请的实施例中,通过以正常曝光图像为基准对短曝光图像进行颜色迁移与亮度迁移,得到颜色转换矩阵和/或亮度参数;基于颜色转换矩阵可以去除短曝光图像中局部图像区域的偏色问题;基于亮度参数可以去除短曝光图像中的banding条纹;确保在采集运动拍摄对象的运动瞬间图像时,去除图像中的banding(例如,去除banding条纹与偏色图像区域),提高图像质量。
此外,在本申请的实施例中,基于图像处理模型输出的双边网格得到颜色转换矩阵和/或亮度参数;通过图像处理模型对第一图像进行颜色迁移与亮度迁移时,能够识别第一图像与第二图像之间的图像内容差异部分;因此,通过图像处理模型得到的得到颜色转换矩阵和/或亮度参数,在对第一图像进行颜色调整和/或亮度调整时不会引入鬼影区域,提高图像质量。
图6是本申请实施例提供的一种图像信号处理器的处理流程的示意性流程图。该方法400包括可以由图1所示的电子设备执行;该方法400包括步骤S410至步骤S460,下面分别对步骤S410至步骤S460进行详细的描述。
步骤S410、图像传感器采集Raw图像。
应理解,Raw图像是指Raw颜色空间的图像;Raw图像可以是指从图像传感器中输出的未经过插值混色的原始的图像数据。
步骤S420、对Raw图像进行第一颜色空间转换处理。
示例性地,可以将Raw图像转换至RGB颜色空间,得到RGB图像。
步骤S430、RGB域算法处理。
可选地,本申请实施例提供的图像处理方法,即去除图像条带处理(例如,去除图像中的banding处理)可以是RGB域算法处理中第一个执行的算法;例如,步骤S430中对RGB图像进行RGB域算法处理包括:对RGB图像先执行去除图像条带处理,得到去除banding的RGB图像;再对去除banding的RGB图像执行其他RGB域算法处理。
可选地,本申请实施例提供的图像处理方法,即去除图像条带处理可以是RGB域算法处理中颜色相关的算法中第一个执行的算法。
在本申请的实施例中,在图像信号处理器对电子设备采集的Raw图像进行处理的过程中,在执行RGB域算法时,可以先执行去除图像条带处理,从而使得图像的颜色与亮度的准确性较高;在颜色与亮度的准确性较高的情况下,再执行其他RGB域算法能够确保在图像的颜色与亮度的准确性较高的基础上,对去除图像banding的图像进行RGB域算法处理,从而提升图像质量。
例如,步骤S430中对RGB图像进行RGB域算法处理包括:颜色相关算法处理与其他RGB域算法处理;在执行颜色相关算法处理时,先执行去除图像条带处理,得到去除banding的RGB图像;再对去除banding的RGB图像执行其他颜色相关的算法处理。
在本申请的实施例中,在执行颜色相关算法处理之前,可以先执行本申请实施例提供的图像处理方法,即执行去除图像条带处理;通过去除图像条带处理,能够得到去除banding的图像,使得图像的颜色与亮度的准确性较高;在颜色与亮度的准确性较高的情况下,再执行颜色相关算法处理,从而能够确保在图像的颜色与亮度的准确性较高的基础上,对去除banding的图像进行其他颜色相关算法处理,从而提升图像质量。
可选地,本申请实施例提供的图像处理方法,即去除图像条带处理可以不限定在RGB域算法中的执行时间。
可选地,去除图像条带处理的实现方式可以参见图7所示的步骤S510至步骤S570。
步骤S440、对处理后的RGB图像进行第二颜色空间转换处理。
示例性地,第二颜色空间可以是指YUV颜色空间;可以将处理后的RGB图像转换至YUV颜色空间;并执行YUV域算法处理。
步骤S450、YUV域算法处理。
示例性地,YUV域算法处理包括但不限于:
亮度降噪处理、边缘增强处理,对比度处理等。
步骤S460、输出处理后的图像。
应理解,上述步骤S410至步骤S460中以RGB域算法处理中包括去除图像条带处理进行举例说明;可选地,去除图像条带处理也可以在YUV域执行;换而言之,也可以在步骤S450中执行去除图像条带处理。
在本申请的实施例中,通过在RGB域算法处理增加去除图像条带处理;能够在电子设备所处的拍摄环境的光源为频闪光源,由于电子设备需要拍摄运动物体从而降低曝光时间导致图像中出现banding的情况下,去除短曝光图像中的banding;确保在采集运动拍摄对象的运动瞬间图像时,去除图像中的banding条纹与偏色图像区域,提高图像质量。
图7是本申请实施例提供的一种图像处理方法的示意性流程图。该方法500包括可以由图1所示的电子设备执行;该方法500包括步骤S501至步骤S508,下面分别对步骤S501至步骤S508进行详细的描述。
需要说明的是,在本申请的实施例中,电子设备所处的拍摄环境的光源为频闪光源。
应理解,图7中所示的方法500可以是指图6所示的去除图像条带处理的相关算法。
可选地，图7所示的方法500可以是图6所示的RGB域算法处理中第一个执行的算法。
在本申请的实施例中,在图像信号处理器对电子设备采集的Raw图像进行处理的过程中,在执行RGB域算法时,可以先执行图7所示的方法再执行其他RGB域算法;由于图7所示的图像处理方法能够得到去除banding的图像;换而言之,图7所示的方法能够对图像进行颜色与亮度的校正;在颜色与亮度的准确性较高的情况下,再执行其他RGB域算法能够确保在图像的颜色与亮度的准确性较高的基础上,对去除banding的图像进行RGB域算法处理,从而提升图像质量。
可选地,图7所示的方法500可以在图6所示的RGB域算法处理中所有的颜色相关的算法之前执行。
在本申请的实施例中,在执行颜色相关的算法之前,可以先执行图7所示的方法500;通过图7所示的图像处理方法,能够输出去除banding的图像;换而言之,图7所示的方法能够对图像进行颜色与亮度的校正;在颜色与亮度的准确性较高的情况下,再执行其他颜色相关的算法能够确保在图像的颜色与亮度的准确性较高的基础上,对去除banding的图像执行其他颜色处理,从而提升图像质量。
可选地，图7所示的方法500可以不限定在图6所示的RGB域算法中执行。
可选地，图7所示的方法500也可以在图6所示的YUV域算法中执行；本申请对此不作任何限定。
步骤S501、运行相机应用程序。
示例性地，用户可以通过单击“相机”应用程序的图标，指示电子设备运行相机应用程序；或者，电子设备处于锁屏状态时，用户可以通过在电子设备的显示屏上向右滑动的手势，指示电子设备运行相机应用程序。又或者，电子设备处于锁屏状态，锁屏界面上包括相机应用程序的图标，用户通过点击相机应用程序的图标，指示电子设备运行相机应用程序。又或者，电子设备在运行其他应用时，该应用具有调用相机应用程序的权限；用户通过点击相应的控件可以指示电子设备运行相机应用程序。例如，电子设备正在运行即时通信类应用程序时，用户可以通过选择相机功能的控件，指示电子设备运行相机应用程序等。
应理解，上述为对运行相机应用程序的操作的举例说明；还可以通过语音指示操作或者其它操作指示电子设备运行相机应用程序；本申请对此不作任何限定。
还应理解,运行相机应用程序可以是指启动相机应用程序。
步骤S502、检测到拍摄对象中包括运动对象。
示例性地,电子设备中可以包括检测模块,检测模块可以对拍摄对象进行检测;当拍摄对象中包括运动对象时,检测模块输出标识,该标识可以指示拍摄场景中包括运动对象。
可选地,运动对象可以是指运动的用户,移动的物体,或者,视频中播放的图像等。
步骤S503、检测到存在图像条带(banding)。
应理解，图像条带（banding）可以包括Luma banding与Chroma banding；其中，Luma banding是指由于缩短曝光时间产生的banding条纹，仅有明暗变化，无颜色改变；例如，如图10中的(a)所示的条纹；Chroma banding是指图像中的局部区域出现图像偏色，图像中的局部区域与图像整体颜色不符；例如，图像偏色可以包括红色、绿色或者蓝色的偏色等；例如，如图10中的(a)所示的图像区域704。
示例性地，电子设备中的防闪烁传感器（Flicker Sensor）可以用于检测是否存在图像条带化；其中，防闪烁传感器（Flicker Sensor）可以为一种对环境光进行采样的传感器。
步骤S504、获取ISP处理后的短曝光图像(第一图像的一个示例)。
可选地,ISP处理后输出的短曝光图像可以是指以多帧Raw图像中的短曝光Raw图像为参考,得到的RGB图像。
应理解，由于短曝光图像的曝光时间较短，通常不是10ms的整数倍，因此短曝光图像中存在banding条纹。此外，频闪光源的电压变化10%时，光源的色温会发生较大的变化（例如，1000K左右）；图像的成像与色温相关，频闪光源的电压发生较小变化时，色温即出现较大变化，导致图像中出现偏色问题；因此，短曝光图像中可能还存在偏色图像区域。
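上述“曝光时间不是10ms整数倍则出现banding条纹”的原理，可以用如下数值示例加以验证；其中逐行曝光的起始时刻间隔、光强波形均为示意性假设（50HZ交流电下光源每秒亮暗100次，光强近似为|sin|波形）：

```python
import math

# 假设性示意: 模拟逐行曝光时, 100Hz亮暗的频闪光源在不同曝光时间下每行接收的能量
def row_brightness(row_start_ms, exposure_ms, steps=1000):
    """对光强 |sin(2*pi*50*t)| 在 [row_start, row_start+exposure] 区间上数值积分(单位: ms)。"""
    total = 0.0
    dt = exposure_ms / steps
    for i in range(steps):
        t = (row_start_ms + i * dt) / 1000.0  # 换算为秒
        total += abs(math.sin(2 * math.pi * 50 * t)) * dt
    return total

# 曝光时间为10ms整数倍: 每行接收的能量相同, 不出现明暗条纹
e10 = [row_brightness(r * 0.5, 10.0) for r in range(20)]
# 曝光时间小于10ms(如3ms): 每行能量随行起始时刻波动, 出现明暗相间的条纹
e3 = [row_brightness(r * 0.5, 3.0) for r in range(20)]

spread_10 = max(e10) - min(e10)   # 接近0
spread_3 = max(e3) - min(e3)      # 明显大于0
```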
可选地,可以先执行步骤S503再执行步骤S504;或者,也可以先执行步骤S504再执行步骤S503;或者,可以同时执行步骤S503与步骤S504;本申请对步骤S503与步骤S504的先后顺序不作任何限定。
可选地，可以获取全尺寸（full size）的短曝光Raw图像，并对短曝光Raw图像进行ISP处理，得到短曝光图像（例如，512*512大小的RGB图像）。
例如,相机模组中摄像头支持的最大分辨率为4096*2160,则获取的全尺寸的短曝光Raw图像的分辨率可以为4096*2160。
步骤S505、检测到指示拍照的第一操作。
可选地,如图11中的(b)所示,第一操作可以为点击拍照控件705的操作。
应理解，上述为对指示拍照的第一操作的举例说明；还可以通过语音指示操作或者其它操作指示电子设备拍照；本申请对此不作任何限定。
步骤S506、响应于第一操作,获取正常曝光图像(第二图像的一个示例)。
应理解,对于交流电为50HZ的频闪光源的拍摄场景中,正常曝光图像可以是指曝光时间为10ms的整数倍的Raw图像;即在采集正常曝光图像时,电子设备的曝光时间为10ms的整数倍。
应理解，由于正常曝光图像的曝光时间为10ms的整数倍，因此正常曝光图像中不存在banding条纹，即图像中不存在Luma banding；此外，Chroma banding产生的原因在于：能量变化的同时伴随色温的变化，导致图像中出现偏色问题；例如，若Luma banding中明暗相间的条纹呈现不同的颜色（例如，暗条纹的位置偏红，亮条纹的位置偏蓝等），则图像中出现Chroma banding；对于正常曝光图像而言，每一行像素接收的能量相同，因此正常曝光图像中不存在能量波动，也不存在偏色问题；因此，正常曝光图像中不会出现Luma banding与Chroma banding。
可选地,在本申请的实施例中,电子设备可以采集多帧Raw图像,多帧Raw图像中包括SANB;其中,SA表示A帧短曝光图像,NB表示B帧正常曝光图像;其中,A为大于或者等于1的整数,B为大于或者等于1的整数;短曝光图像的曝光时间小于10ms;正常曝光图像的曝光时间为10ms的整数倍。
可选地，在本申请的实施例中，电子设备可以采集多帧Raw图像，多帧Raw图像中包括SANBLC；其中，SA表示A帧短曝光图像，NB表示B帧正常曝光图像，LC表示C帧长曝光图像；其中，A为大于或者等于1的整数，B为大于或者等于1的整数，C为大于或者等于0的整数；短曝光图像的曝光时间小于10ms；正常曝光图像的曝光时间为10ms的整数倍；长曝光图像的曝光时间大于正常曝光图像的曝光时间。
例如，多帧Raw图像可以是指S4N2L的7帧图像，即SSSSNNL的Raw图像；其中，4帧S为预览帧中的短曝光Raw图像；2帧N表示曝光值大于或者等于短曝光图像、且曝光时间为10ms整数倍的正常曝光图像；1帧L表示长曝光图像。
可选地,可以对采集到的一帧或者多帧正常曝光的Raw图像中的第一帧正常曝光的Raw图像,进行颜色空间转换处理(例如,去马赛克处理)后得到RGB图像。
在本申请的实施例中,可以基于多帧图像中的第一帧正常曝光的Raw图像得到正常曝光图像;由于第一帧正常曝光的Raw图像与短曝光的Raw图像的时间差较短;因此,基于第一帧正常曝光的Raw图像得到正常曝光图像,并对短曝光图像进行颜色迁移处理与亮度迁移处理,能够在一定程度上避免引入运动鬼影。
步骤S507、将短曝光图像与正常曝光图像输入图像处理模型进行处理,得到双边网格。
应理解,双边网格是指一种数据结构;在本申请的实施例中,双边网格可以是一个网格矩阵;该网格矩阵中包括颜色校正转换矩阵与亮度参数(例如,亮度阶数),其中,颜色校正转换矩阵包括红色像素增益(R gain)、绿色像素增益(G gain)与蓝色像素增益(B gain);基于双边网格中的数据可以将正常曝光图像的颜色与亮度迁移至短曝光图像中,从而实现去除短曝光图像中的banding条纹与偏色图像区域。
示例性地,双边网格可以为一个32*32*8*9的网格矩阵;其中,32*32可以表示宽度与高度,8可以表示亮度阶数;9可以表示颜色校正转换矩阵,即作用于每一个RGB值的3*3矩阵。
可选地,图像处理模型的网络结构可以为卷积神经网络;例如,图像处理模型可以为HDR Net;图像处理模型的训练方法可以参见后续图8所示的相关描述。
步骤S508、基于双边网格对短曝光图像进行处理,得到去除条纹和/或局部偏色区域的图像(第三图像的一个示例)。
可选地,可以基于双边网格中的颜色转换矩阵对短曝光图像进行颜色迁移处理,去除短曝光图像中的偏色图像区域;和/或基于双边网格中的亮度参数对短曝光图像进行亮度迁移处理,从而实现去除短曝光图像中的banding条纹。
可选地,可以基于双边网格中的数据与短曝光图像进行插值处理,得到去除banding的图像。
例如,将双边网格中的数据与短曝光图像的像素矩阵进行相乘,得到去除banding的图像。
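基于双边网格对短曝光图像逐像素应用颜色校正转换矩阵的过程，可以示意为如下草图；其中采用最近邻取格、以像素亮度作为网格第三维索引，并省略了三线性插值；网格内统一填充单位矩阵仅用于说明数据组织方式，并非实际实现：

```python
import numpy as np

# 假设性示意: 32*32*8的双边网格, 每个网格单元存放一个3x3颜色校正矩阵(对应32*32*8*9)
H, W = 64, 64
img = np.random.rand(H, W, 3).astype(np.float32)  # 短曝光RGB图像, 值域[0,1]
grid = np.tile(np.eye(3, dtype=np.float32), (32, 32, 8, 1, 1))  # 此处为单位矩阵

def apply_bilateral_grid(img, grid):
    h, w, _ = img.shape
    gy = np.arange(h) * grid.shape[0] // h        # 空间维度映射到网格行
    gx = np.arange(w) * grid.shape[1] // w        # 空间维度映射到网格列
    luma = img.mean(axis=2)                       # 以亮度作为网格第三维(亮度阶)索引
    gz = np.clip((luma * grid.shape[2]).astype(int), 0, grid.shape[2] - 1)
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            m = grid[gy[y], gx[x], gz[y, x]]      # 该像素所属网格单元的3x3矩阵
            out[y, x] = m @ img[y, x]             # 矩阵作用于每一个RGB值
    return out

out = apply_bilateral_grid(img, grid)
```

由于示例网格中均为单位矩阵，输出与输入相同；实际使用时网格中为图像处理模型预测的颜色校正转换矩阵与亮度参数。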
需要说明的是,上述步骤S506为在检测到指示拍照的第一操作之后,获取正常曝光图像;可选地,在一种实现方式中,在电子设备处于预览态时,即电子设备未检测到第一操作之前,电子设备可以采用单帧逐行HDR(Stagger HDR)技术获取短曝光图像与正常曝光图像。
应理解,stagger HDR是指以“行”作为输出单位的“长短帧”拍摄的技术。即采用时序上先后两次曝光获取正常曝光图像与短曝光图像。
需要说明的是,在图7所示的方法500中以短曝光图像的曝光时间小于10ms,正常曝光图像的曝光时间为10ms的整数倍进行举例说明;本申请对此不作任何限定。
可选地，在本申请的实施例中，短曝光图像的曝光时间、正常曝光图像的曝光时间与电子设备所处的拍摄环境的频闪光源在每秒的亮暗次数相关；例如，正常曝光图像的曝光时间=(1000/频闪光源在每秒的亮暗次数)毫秒的整数倍。
示例性地,若拍摄环境的频闪光源的工作电压频率为50HZ,即频闪光源每秒钟灯光亮暗100次,则短曝光图像是指曝光时间小于10ms的图像;正常曝光图像是指曝光时间为10ms整数倍的图像。
示例性地,拍摄环境的频闪光源的工作电压频率为60HZ,即频闪光源每秒钟灯光亮暗120次,则短曝光图像是指曝光时间小于8.3ms的图像;正常曝光图像是指曝光时间为8.3ms整数倍的图像。
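上述曝光时间与频闪光源每秒亮暗次数的关系，可以示意为如下计算草图；其中函数名为说明用的假设：

```python
# 假设性示意: 由交流电工作电压频率推算频闪光源每秒亮暗次数与基准曝光周期
def flicker_per_second(ac_hz):
    # 频闪光源每秒亮暗次数为工作电压频率的2倍(50HZ -> 每秒亮暗100次)
    return 2 * ac_hz

def base_exposure_ms(ac_hz):
    # 正常曝光图像的曝光时间 = (1000 / 每秒亮暗次数) 毫秒的整数倍
    return 1000.0 / flicker_per_second(ac_hz)

def is_normal_exposure(exposure_ms, ac_hz, tol=1e-6):
    # 判断曝光时间是否为基准曝光周期的整数倍
    ratio = exposure_ms / base_exposure_ms(ac_hz)
    return abs(ratio - round(ratio)) < tol and round(ratio) >= 1
```

例如，50HZ交流电下基准曝光周期为10ms，60HZ交流电下约为8.3ms，与上文描述一致。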
可选地,图7所示的步骤S501至步骤S508以在RGB颜色空间执行去除banding条纹与偏色图像区域进行举例说明;上述去除banding条纹与偏色图像区域的步骤也可以在YUV颜色空间执行;若在YUV颜色空间执行,则短曝光图像与正常曝光图像可以为YUV图像。
可选地,图7所示的步骤S501至步骤S508以电子设备拍照的场景进行举例描述;图7所示的方法也可以应用于电子设备录像的场景。
可选地,上述为基于图像处理模型得到双边网格进行举例说明;在本申请的实施例中,可以对第一图像与第二图像先进行配准处理与平滑处理;对配准处理后的第一图像与第二图像进行逐像素的作差,得到颜色转换矩阵和/或亮度参数。
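上述“配准后逐像素作差并平滑”的思路可以示意为如下草图；其中假设两帧已完成配准，盒式滤波的核大小为假设值，并非本申请的实际实现：

```python
import numpy as np

# 假设性示意: 对配准后的正常曝光帧与短曝光帧逐像素作差, 再平滑得到亮度差异图
def brightness_shift(short_img, normal_img, k=5):
    """逐像素作差得到亮度差异, 再用k*k盒式滤波平滑(k为假设的核大小)。"""
    diff = normal_img - short_img            # 配准后逐像素作差
    pad = k // 2
    padded = np.pad(diff, pad, mode="edge")  # 边缘复制填充
    out = np.empty_like(diff)
    h, w = diff.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()  # 均值平滑抑制噪声
    return out

short = np.full((8, 8), 0.4)    # 示意: 亮度均匀的短曝光帧
normal = np.full((8, 8), 0.9)   # 示意: 亮度均匀的正常曝光帧
shift = brightness_shift(short, normal)
```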
在本申请的实施例中，在电子设备所处的拍摄环境的光源为频闪光源，且电子设备因需要拍摄运动物体而降低曝光时间、导致图像中出现banding条纹和/或偏色图像区域的情况下，通过以正常曝光图像为基准对短曝光图像进行颜色迁移与亮度迁移，得到颜色转换矩阵和/或亮度参数；基于颜色转换矩阵可以去除短曝光图像中局部图像区域的偏色问题；基于亮度参数可以去除短曝光图像中的banding条纹；从而确保在采集运动拍摄对象的运动瞬间图像时，去除图像中的banding（例如，去除banding条纹与偏色图像区域），提高图像质量。
此外，在本申请的实施例中，基于图像处理模型输出的双边网格得到颜色转换矩阵和/或亮度参数；通过图像处理模型对第一图像进行颜色迁移与亮度迁移时，能够识别第一图像与第二图像之间的图像内容差异部分；因此，通过图像处理模型得到的颜色转换矩阵和/或亮度参数，在对第一图像进行颜色调整和/或亮度调整时不会引入鬼影区域，提高图像质量。
图8是本申请实施例提供的一种图像处理模型的训练方法的示意性流程图。该方法600可以由图1所示的电子设备执行；该方法600包括步骤S610至步骤S650，下面分别对步骤S610至步骤S650进行详细的描述。
步骤S610、获取训练数据。
其中，训练数据中包括第一样本图像、第二样本图像与第三样本图像；其中，第一样本图像为无banding的图像；第二样本图像为在第一样本图像中增加banding后的图像；第三样本图像为无banding的图像，且第三样本图像的图像质量高于第一样本图像的图像质量。
例如，第三样本图像可以为经过ISP处理后且无banding的RGB图像；第一样本图像可以为基于正常曝光的样本Raw图像进行颜色空间转换后得到的RGB图像；因此，第三样本图像的图像质量高于第一样本图像的图像质量。
步骤S620、将第一样本图像与第二样本图像输入至待训练的图像处理模型,得到预测双边网格。
需要说明的是，图像处理模型用于学习相同拍摄场景中无banding图像与有banding图像之间的颜色差异与亮度差异；目的在于通过输出的预测双边网格，能够将无banding图像的颜色和亮度迁移至有banding图像中。
可选地，图像处理模型的网络结构可以为VGG网络，或者HDR Net。
步骤S630、基于预测双边网格对第一样本图像进行插值处理,得到预测图像。
示例性地,可以对双边网格中的数据与第一样本图像的数据进行相乘,得到预测图像。
步骤S640、基于预测图像与第三样本图像之间的差异,更新图像处理模型的参数得到训练后的图像处理模型。
示例性地，可以计算预测图像与第三样本图像中各个像素点之间的差异，通过反向传播算法训练待训练的图像处理模型，使得待训练的图像处理模型的损失函数收敛，得到训练后的图像处理模型。
可选地，在本申请的实施例中，考虑到Chroma banding，图像处理模型可以包括8个参数；8个参数分别为A*3、a*1、b*1、c*3；其中，A表示幅值；a表示频率；b表示初始相位；c表示偏移项；A*3表示RGB像素分别对应的幅值；c*3表示RGB像素分别对应的偏移项。
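上述8参数模型可以示意性地写成按行变化的RGB通道增益 gain_ch(row) = A_ch*sin(a*row+b) + c_ch；如下草图以假设的参数值合成Chroma banding的通道增益：幅值不同的通道在同一条纹位置得到不同的增益，从而呈现偏色；其中所有参数取值均为示意性假设：

```python
import math

# 假设性示意: 8参数 (A*3, a*1, b*1, c*3) 的Chroma banding模型
A = [0.10, 0.02, 0.08]    # RGB各通道幅值(假设值, R通道波动最大)
a = 2 * math.pi / 50      # 条纹频率(假设每50行一个明暗周期)
b = 0.0                   # 初始相位
c = [1.0, 1.0, 1.0]       # RGB各通道偏移项

def chroma_gain(row):
    """返回第row行的RGB通道增益; 各通道幅值不同 => 条纹处出现颜色偏移。"""
    return [A[ch] * math.sin(a * row + b) + c[ch] for ch in range(3)]

g = chroma_gain(12)  # 正弦为正的行: R增益高于G增益, 该条纹位置偏红
```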
在本申请的实施例中，通过图像处理模型能够得到将正常曝光时间或者长曝光时间得到的图像帧的颜色与亮度迁移至短曝光时间的图像帧的双边网格；在交流电为50HZ的频闪光源的拍摄场景中，由于正常曝光时间或者长曝光时间为10ms的整数倍，因此，长曝光图像与正常曝光图像中通常不存在图像条纹或者偏色图像区域；基于图像处理模型输出的双边网格得到颜色转换矩阵和/或亮度参数；通过图像处理模型对第一图像进行颜色迁移与亮度迁移时，能够识别第一图像与第二图像之间的图像内容差异部分；因此，通过图像处理模型得到的颜色转换矩阵和/或亮度参数，在对第一图像进行颜色调整和/或亮度调整时不会引入鬼影区域，提高图像质量。
下面结合图9至图13对在电子设备中的界面示意图进行举例描述。
图9是本申请实施例提供的一种电子设备的界面示意图。
在本申请的实施例中，在电子设备运行相机应用程序后，电子设备显示的预览界面中的预览图像包括banding条纹和/或偏色区域；在电子设备检测到用户点击控件后，可以执行本申请实施例提供的图像处理方法，即执行去除图像条带的处理；在用户点击拍照控件后，电子设备采集图像；该图像为去除banding条纹和/或偏色区域的图像，即为去除图像条带处理后输出的图像。
在一个示例中,如图9中的(b)所示电子设备检测到点击控件703的操作之后,执行本申请实施例提供的图像处理方法。
示例性地，如图9所示，图9中的(a)所示的图形用户界面（graphical user interface，GUI）为电子设备的桌面701；电子设备检测到用户点击桌面701上的相机应用程序的控件702，如图9中的(b)所示；在电子设备检测到用户点击桌面701上的相机应用程序的控件702之后，电子设备运行相机应用程序；例如，如图10中的(a)所示，电子设备可以显示拍照预览界面；拍照预览界面中包括预览图像与控件703，其中，预览图像中包括明暗相间的条纹与图像区域704；图像区域704可以为红色、绿色、蓝色或者其他颜色；电子设备检测到用户点击控件703的操作，如图10中的(b)所示；当电子设备检测到用户点击控件703的操作之后，电子设备可以执行本申请实施例提供的图像处理方法，显示如图11中的(a)所示的预览界面；在预览界面中，包括拍照控件705；电子设备检测到用户点击拍照控件705的操作，如图11中的(b)所示；在电子设备检测到用户点击拍照控件705的操作之后，显示如图11中的(c)所示的显示界面，显示界面中包括相册控件706；电子设备检测到用户点击相册控件706的操作，如图11中的(d)所示；当电子设备检测到用户点击相册控件706的操作之后，显示如图12所示的界面。
在一个示例中,如图13中的(d)所示电子设备检测到点击控件709的操作之后,执行本申请实施例提供的图像处理方法。
示例性地,电子设备中运行相机应用程序后,可以显示如图13中的(a)所示的预览界面;预览界面中包括预览图像与控件707,其中,预览图像中包括明暗相间的条纹与图像区域704,图像区域704可以为红色、绿色、蓝色或者其他颜色;电子设备检测到用户点击控件707的操作,如图13中的(b)所示;在电子设备检测到用户点击控件707的操作之后,显示设置界面,如图13中的(c)所示;设置界面中包括去除图像条带的控件709;电子设备检测到用户点击去除图像条带的控件709的操作,如图13中的(d)所示;在电子设备检测到用户点击去除图像条带的控件709的操作之后,执行本申请实施例提供的图像处理方法。
需要说明的是,上述为对电子设备中的显示界面的举例说明,本申请对此不作任何限定。
应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。
上文结合图1至图13详细描述了本申请实施例提供的图像处理方法;下面将结合图14至图15详细描述本申请的装置实施例。应理解,本申请实施例中的装置可以执行前述本申请实施例的各种方法,即以下各种产品的具体工作过程,可以参考前述方法实施例中的对应过程。
图14是本申请实施例提供的一种电子设备的结构示意图。该电子设备800包括处理模块810与显示模块820。
需要说明的是,电子设备所处的拍摄环境的光源为频闪光源。
其中，处理模块810用于运行所述电子设备的相机应用程序；显示模块820用于显示第一图像，所述第一图像为基于第一曝光时间采集的拍摄对象的图像，所述拍摄对象为运动对象，所述第一曝光时间小于第一时长，所述第一图像中包括条纹和/或偏色图像区域；处理模块810还用于检测到第一操作，所述第一操作指示所述电子设备拍摄或者录像；响应于所述第一操作，获取第二图像，所述第二图像为基于第二曝光时间采集的所述拍摄对象的图像，所述第二曝光时间为所述第一时长的整数倍；基于所述第一图像与所述第二图像，得到颜色转换矩阵和/或亮度参数，所述颜色转换矩阵用于对所述第一图像进行颜色调整，所述亮度参数用于对所述第一图像进行亮度调整；基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行第一图像处理，得到第三图像，所述第三图像为去除所述条纹和/或偏色图像区域的图像；显示或者保存所述第三图像。
可选地,作为一个实施例,处理模块810还用于:
将所述第一图像与所述第二图像输入至图像处理模型,得到双边网格数据;其中,所述图像处理模型用于以所述第二图像为基准对所述第一图像进行颜色迁移处理与亮度迁移处理,所述双边网格数据包括所述颜色转换矩阵和/或所述亮度参数,所述第一图像的尺寸与所述第二图像的尺寸相同。
可选地,作为一个实施例,处理模块810具体用于:
基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行插值处理,得到所述第三图像。
可选地,作为一个实施例,处理模块810具体用于:
在第一颜色空间,基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行所述第一图像处理,得到处理后的图像;
在所述第一颜色空间,对所述处理后的图像进行第二图像处理,得到所述第三图像,所述第二图像处理为所述第一颜色空间中的颜色处理算法。
可选地,作为一个实施例,处理模块810还用于:对所述电子设备所处的拍摄场景进行检测,检测到所述运动对象;且检测到所述第一图像中存在所述条纹和/或偏色图像区域。
可选地,作为一个实施例,所述第一时长的大小为基于所述频闪光源在每秒的亮暗次数得到的。
可选地,作为一个实施例,所述第一时长=1000/所述频闪光源在每秒的亮暗次数。
可选地,作为一个实施例,所述频闪光源在每秒的亮暗次数与所述频闪光源的工作电压的频率关联。
可选地,作为一个实施例,所述图像处理模型为卷积神经网络。
可选地,作为一个实施例,所述图像处理模型是通过以下方法训练得到的:
获取样本数据,所述样本数据包括第一样本图像、第二样本图像与第三样本图像,所述第二样本图像中包括所述第一样本图像的图像内容、条纹和/或偏色图像区域,所述第三样本图像与所述第一样本图像具有相同的图像内容,所述第三样本图像的图像质量高于所述第一样本图像的图像质量;
将所述第一样本图像与所述第二样本图像输入至待训练的图像处理模型,得到预测双边网格数据;
基于所述预测双边网格数据对所述第二样本图像进行插值处理,得到预测图像;
基于预测图像与所述第三样本图像之间的差异训练所述待训练的图像处理模型,得到所述图像处理模型。
可选地,作为一个实施例,所述电子设备包括图像信号处理器,所述第一图像为所述图像信号处理器输出的图像。
可选地,作为一个实施例,所述第二图像为对所述电子设备采集的Raw图像进行第三图像处理得到的图像,所述第三图像处理包括颜色空间转换处理。
需要说明的是，上述电子设备800以功能模块的形式体现。这里的术语“模块”可以通过软件和/或硬件形式实现，对此不作具体限定。
例如,“模块”可以是实现上述功能的软件程序、硬件电路或二者结合。所述硬件电路可能包括应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。
因此,在本申请的实施例中描述的各示例的单元,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
图15示出了本申请提供的一种电子设备的结构示意图。图15中的虚线表示该单元或该模块为可选的;电子设备900可以用于实现上述方法实施例中描述的图像处理方法。
电子设备900包括一个或多个处理器901,该一个或多个处理器901可支持电子设备900实现方法实施例中的图像处理方法。处理器901可以是通用处理器或者专用处理器。例如,处理器901可以是中央处理器(central processing unit,CPU)、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件,如分立门、晶体管逻辑器件或分立硬件组件。
可选地,处理器901可以用于对电子设备900进行控制,执行软件程序,处理软件程序的数据。电子设备900还可以包括通信单元905,用以实现信号的输入(接收)和输出(发送)。
例如,电子设备900可以是芯片,通信单元905可以是该芯片的输入和/或输出电路,或者,通信单元905可以是该芯片的通信接口,该芯片可以作为终端设备或其它电子设备的组成部分。
又例如，电子设备900可以是终端设备，通信单元905可以是该终端设备的收发器。电子设备900中可以包括一个或多个存储器902，其上存有程序904，程序904可被处理器901运行，生成指令903，使得处理器901根据指令903执行上述方法实施例中描述的图像处理方法。
可选地,存储器902中还可以存储有数据。
可选地,处理器901还可以读取存储器902中存储的数据,该数据可以与程序904存储在相同的存储地址,该数据也可以与程序904存储在不同的存储地址。
可选地,处理器901和存储器902可以单独设置,也可以集成在一起,例如,集成在终端设备的系统级芯片(system on chip,SOC)上。
示例性地，存储器902可以用于存储本申请实施例中提供的图像处理方法的相关程序904，处理器901可以用于在执行图像处理方法时调用存储器902中存储的图像处理方法的相关程序904，执行本申请实施例的图像处理方法；例如，运行电子设备的相机应用程序；显示第一图像，第一图像为基于第一曝光时间采集的拍摄对象的图像，拍摄对象为运动对象，第一曝光时间小于第一时长，第一图像中包括条纹和/或偏色图像区域；检测到第一操作，第一操作指示电子设备拍摄或者录像；响应于第一操作，获取第二图像，第二图像为基于第二曝光时间采集的拍摄对象的图像，第二曝光时间为第一时长的整数倍；基于第一图像与第二图像，得到颜色转换矩阵和/或亮度参数，颜色转换矩阵用于对第一图像进行颜色调整，亮度参数用于对第一图像进行亮度调整；基于颜色转换矩阵和/或亮度参数对第一图像进行第一图像处理，得到第三图像，第三图像为去除条纹和/或偏色图像区域的图像；显示或者保存第三图像。
可选地,本申请还提供了一种计算机程序产品,该计算机程序产品被处理器901执行时实现本申请中任一方法实施例中的图像处理方法。
例如,该计算机程序产品可以存储在存储器902中,例如是程序904,程序904经过预处理、编译、汇编和链接等处理过程最终被转换为能够被处理器901执行的可执行目标文件。
可选地,本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现本申请中任一方法实施例所述的图像处理方法。该计算机程序可以是高级语言程序,也可以是可执行目标程序。
例如,该计算机可读存储介质例如是存储器902。存储器902可以是易失性存储器或非易失性存储器,或者,存储器902可以同时包括易失性存储器和非易失性存储器。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的电子设备的实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的，作为单元显示的部件可以是或者也可以不是物理单元，即可以位于一个地方，或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。
另外,本文中的术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。总之，以上所述仅为本申请技术方案的较佳实施例而已，并非用于限定本申请的保护范围。凡在本申请的精神和原则之内，所作的任何修改、等同替换、改进等，均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种图像处理方法,其特征在于,应用于电子设备,所述电子设备所处的拍摄环境的光源为频闪光源,所述图像处理方法包括:
    运行所述电子设备的相机应用程序;
    显示第一图像,所述第一图像为基于第一曝光时间采集的拍摄对象的图像,所述拍摄对象为运动对象,所述第一曝光时间小于第一时长,所述第一图像中包括条纹和/或偏色图像区域;
    检测到第一操作,所述第一操作指示所述电子设备拍摄或者录像;
    响应于所述第一操作,获取第二图像,所述第二图像为基于第二曝光时间采集的所述拍摄对象的图像,所述第二曝光时间为所述第一时长的整数倍;
    基于所述第一图像与所述第二图像,得到颜色转换矩阵和/或亮度参数,所述颜色转换矩阵用于对所述第一图像进行颜色调整,所述亮度参数用于对所述第一图像进行亮度调整;
    基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行第一图像处理,得到第三图像,所述第三图像为去除所述条纹和/或偏色图像区域的图像;
    显示或者保存所述第三图像。
  2. 如权利要求1所述的图像处理方法,其特征在于,还包括:
    将所述第一图像与所述第二图像输入至图像处理模型,得到双边网格数据;其中,所述图像处理模型用于以所述第二图像为基准对所述第一图像进行颜色迁移处理与亮度迁移处理,所述双边网格数据包括所述颜色转换矩阵和/或所述亮度参数,所述第一图像的尺寸与所述第二图像的尺寸相同。
  3. 如权利要求1或2所述的图像处理方法,其特征在于,所述基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行第一图像处理,得到第三图像,包括:
    基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行插值处理,得到所述第三图像。
  4. 如权利要求1至3中任一项所述图像处理方法,其特征在于,所述基于所述颜色转换矩阵和/或亮度参数对所述第一图像进行插值处理,得到所述第三图像,包括:
    在第一颜色空间,基于所述颜色转换矩阵和/或所述亮度参数对所述第一图像进行所述第一图像处理,得到处理后的图像;
    在所述第一颜色空间,对所述处理后的图像进行第二图像处理,得到所述第三图像,所述第二图像处理为所述第一颜色空间中的颜色处理算法。
  5. 如权利要求1至4中任一项所述的图像处理方法,其特征在于,所述获取第二图像之前,还包括:
    对所述电子设备所处的拍摄场景进行检测,检测到所述运动对象;且
    检测到所述第一图像中存在所述条纹和/或偏色图像区域。
  6. 如权利要求1至5中任一项所述的图像处理方法,其特征在于,所述第一时长的大小为基于所述频闪光源在每秒的亮暗次数得到的。
  7. 如权利要求6所述的图像处理方法,其特征在于,所述第一时长=1000/所述频闪光源在每秒的亮暗次数。
  8. 如权利要求7所述的图像处理方法,其特征在于,所述频闪光源在每秒的亮暗次数与所述频闪光源的工作电压的频率关联。
  9. 如权利要求2所述的图像处理方法,其特征在于,所述图像处理模型为卷积神经网络。
  10. 如权利要求2或9所述的图像处理方法,其特征在于,所述图像处理模型是通过以下方法训练得到的:
    获取样本数据,所述样本数据包括第一样本图像、第二样本图像与第三样本图像,所述第二样本图像中包括所述第一样本图像的图像内容、条纹和/或偏色图像区域,所述第三样本图像与所述第一样本图像具有相同的图像内容,所述第三样本图像的图像质量高于所述第一样本图像的图像质量;
    将所述第一样本图像与所述第二样本图像输入至待训练的图像处理模型,得到预测双边网格数据;
    基于所述预测双边网格数据对所述第二样本图像进行插值处理,得到预测图像;
    基于预测图像与所述第三样本图像之间的差异训练所述待训练的图像处理模型,得到所述图像处理模型。
  11. 如权利要求1至10中任一项所述的图像处理方法,其特征在于,所述电子设备包括图像信号处理器,所述第一图像为所述图像信号处理器输出的图像。
  12. 如权利要求1至11中任一项所述的图像处理方法,其特征在于,所述第二图像为对所述电子设备采集的Raw图像进行第三图像处理得到的图像,所述第三图像处理包括颜色空间转换处理。
  13. 一种电子设备,其特征在于,包括:
    一个或多个处理器和存储器;
    所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行如权利要求1至12中任一项所述的方法。
  14. 一种芯片系统,其特征在于,所述芯片系统应用于电子设备,所述芯片系统包括一个或多个处理器,所述处理器用于调用计算机指令以使得所述电子设备执行如权利要求1至12中任一项所述的方法。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储了计算机程序,当所述计算机程序被处理器执行时,使得处理器执行权利要求1至12中任一项所述的方法。
  16. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码被电子设备运行时,使得所述电子设备执行权利要求1至12中任一项所述的方法。
PCT/CN2023/114005 2022-09-15 2023-08-21 图像处理方法和电子设备 WO2024055816A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211123861.XA CN116744120B (zh) 2022-09-15 2022-09-15 图像处理方法和电子设备
CN202211123861.X 2022-09-15

Publications (1)

Publication Number Publication Date
WO2024055816A1 true WO2024055816A1 (zh) 2024-03-21

Family

ID=87917415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114005 WO2024055816A1 (zh) 2022-09-15 2023-08-21 图像处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN116744120B (zh)
WO (1) WO2024055816A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117201930B (zh) * 2023-11-08 2024-04-16 荣耀终端有限公司 一种拍照方法和电子设备
CN117692786A (zh) * 2024-02-01 2024-03-12 荣耀终端有限公司 一种拍摄方法、电子设备和存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110205381A1 (en) * 2007-03-05 2011-08-25 Tessera Technologies Ireland Limited Tone mapping for low-light video frame enhancement
CN103379288A (zh) * 2012-04-12 2013-10-30 索尼公司 图像处理设备、图像处理方法和程序
CN111209775A (zh) * 2018-11-21 2020-05-29 杭州海康威视数字技术股份有限公司 信号灯图像处理方法、装置、设备及存储介质
CN111491111A (zh) * 2020-04-20 2020-08-04 Oppo广东移动通信有限公司 高动态范围图像处理系统及方法、电子设备和可读存储介质
US20200396369A1 (en) * 2018-03-29 2020-12-17 Fujifilm Corporation Image capturing apparatus, image capturing method, and program
US20210035272A1 (en) * 2019-07-30 2021-02-04 Qualcomm Incorporated Image banding correction in high dynamic range imaging
CN112738414A (zh) * 2021-04-06 2021-04-30 荣耀终端有限公司 一种拍照方法、电子设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7835569B2 (en) * 2006-10-13 2010-11-16 Apple Inc. System and method for raw image processing using conversion matrix interpolated from predetermined camera characterization matrices
JP6677100B2 (ja) * 2016-06-24 2020-04-08 コニカミノルタ株式会社 放射線画像撮影システム
WO2020148868A1 (ja) * 2019-01-17 2020-07-23 三菱電機株式会社 情報処理装置、情報処理方法及び情報処理プログラム
CN111757015B (zh) * 2019-03-28 2021-12-03 杭州海康威视数字技术股份有限公司 一种曝光控制方法、装置及电子设备
CN110958401B (zh) * 2019-12-16 2022-08-23 北京迈格威科技有限公司 一种超级夜景图像颜色校正方法、装置和电子设备
CN114513609B (zh) * 2020-11-17 2024-05-10 浙江大华技术股份有限公司 相机曝光时间的调节方法、图像频闪检测方法及装置

Also Published As

Publication number Publication date
CN116744120A (zh) 2023-09-12
CN116744120B (zh) 2024-04-12

