WO2023011197A1 - Image processing method, electronic device, and computer-readable storage medium - Google Patents

Image processing method, electronic device, and computer-readable storage medium

Publication number: WO2023011197A1
Authority: WIPO (PCT)
Prior art keywords: data, image data, camera, image, luminance
Application number: PCT/CN2022/106966
Other languages: English (en), French (fr)
Inventor: 郑亮
Original assignee: 中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司
Priority: EP22851911.2A (published as EP4358019A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Definitions

  • the embodiments of the present application relate to the technical field of terminals, and in particular, to an image processing method, electronic equipment, and a computer-readable storage medium.
  • Multiple cameras are a core selling point of camera-equipped terminals, and multi-camera collaborative photography technology is developing rapidly, which can greatly improve shooting satisfaction. However, switching or zooming is not smooth enough when digital zooming or camera switching is performed during multi-camera collaborative shooting.
  • an embodiment of the present application provides an image processing method, including: acquiring first image data corresponding to a first image captured by a first camera, and acquiring second image data corresponding to a second image captured by a second camera;
  • in a case where the first chromaticity data of the first image data is better than the second chromaticity data of the second image data, and the first luminance data of the second image data is better than the second luminance data of the first image data, the first image data and the second image data are fused to obtain fused image data.
  • an embodiment of the present application provides an electronic device, including: at least one processor; and a memory on which at least one computer program is stored, where, when the at least one computer program is executed by the at least one processor, the image processing method described above is implemented.
  • an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the foregoing image processing method is implemented.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application.
  • Fig. 2 is a schematic diagram of the first image data provided by the embodiment of the present application.
  • FIG. 3 is a schematic diagram of second image data provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the third chromaticity data provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of the fourth chromaticity data provided by the embodiment of the present application.
  • Fig. 6 is a schematic diagram of the first luminance data provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of fused image data provided by an embodiment of the present application.
  • FIG. 8 is a block diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 9 is a block diagram of a terminal provided by an embodiment of the present application.
  • FIG. 10 is a composition block diagram of an electronic device provided by an embodiment of the present application.
  • the built-in 3A algorithm is mainly used at the image signal processing (ISP, Image Signal Processing) end to calculate the auto white balance (AWB, Auto White Balance) and auto exposure (AE, Auto Exposure) of each camera.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method, which is applicable to a terminal including two or more cameras.
  • the cameras included in the terminal may include, but are not limited to, the following combinations: ultra-wide-angle camera + wide-angle camera (main camera); wide-angle camera (main camera) + telephoto camera; wide-angle camera (main camera) + periscope camera; ultra-wide-angle camera + wide-angle camera (main camera) + telephoto camera; ultra-wide-angle camera + wide-angle camera (main camera) + periscope camera; and ultra-wide-angle camera + wide-angle camera (main camera) + telephoto camera + periscope camera.
  • the image processing method includes steps 100 to 102 .
  • Step 100: obtain first image data corresponding to a first image captured by the first camera, and obtain second image data corresponding to a second image captured by the second camera.
  • the first camera can be used to capture the first image to obtain the first image data
  • the second camera can be used to capture the second image to obtain the second image data.
  • the first image is captured by the first camera to obtain the first image data, and the second image is captured by the second camera to obtain the second image data.
  • the preset conditions include at least one of the following:
  • the first chrominance data of the first image data is better than the second chrominance data of the second image data, and the first luminance data of the second image data is better than the second luminance data of the first image data;
  • the second luminance data of the first image data is better than the first luminance data of the second image data, and the second chrominance data of the second image data is better than the first chrominance data of the first image data;
  • the color deviation of the second camera is greater than that of the first camera; color deviation refers to a color difference, visible to the naked eye, between images of the same scene;
  • the time required to adjust the color state of the second image data to be the same as the color state of the first image data is greater than or equal to a preset time
  • the brightness variation of the second camera is greater than the brightness variation of the first camera
  • the time required to adjust the brightness state of the second image data to be the same as that of the first image data is greater than or equal to a preset time.
  • the first chromaticity data of the first image data is better than the second chromaticity data of the second image data includes at least one of the following:
  • the accuracy of the first chrominance data of the first image data is greater than the accuracy of the second chrominance data of the second image data
  • the signal-to-noise ratio of the first chrominance data of the first image data is greater than the signal-to-noise ratio of the second chrominance data of the second image data.
  • the first luminance data of the second image data is better than the second luminance data of the first image data including at least one of the following:
  • the accuracy of the first luminance data of the second image data is greater than the accuracy of the second luminance data of the first image data
  • the signal-to-noise ratio of the first luminance data of the second image data is greater than the signal-to-noise ratio of the second luminance data of the first image data.
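The signal-to-noise comparison above can be sketched as follows. This is a minimal illustration with hypothetical helper names; the embodiments do not specify an SNR estimator, so a horizontal-difference noise proxy is assumed here purely for demonstration:

```python
import numpy as np

def chroma_snr(chroma: np.ndarray) -> float:
    """Estimate SNR (dB) of a chroma plane: total variance over the
    variance of a horizontal-difference residual used as a noise proxy."""
    plane = chroma.astype(np.float64)
    noise = plane[:, 1:] - plane[:, :-1]  # crude high-pass residual
    noise_power = noise.var()
    if noise_power == 0:
        return float("inf")  # perfectly smooth gradient: no detectable noise
    return 10.0 * np.log10(plane.var() / noise_power)

def better_chroma(first: np.ndarray, second: np.ndarray) -> str:
    """Return which camera's chroma data has the higher estimated SNR."""
    return "first" if chroma_snr(first) > chroma_snr(second) else "second"
```

In practice the terminal would run such a comparison (or use calibrated per-camera quality data) to decide which camera supplies chrominance and which supplies luminance.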
  • the digital zoom range of the camera is limited.
  • when the focal length after digital zoom exceeds the digital zoom range of the camera, it is necessary to switch cameras.
  • the switch target is the camera whose digital zoom range is adjacent to that of the current camera, that is, the camera whose digital zoom range differs least from the current camera's.
  • the digital zoom range of camera 1 is 1 to 5 times digital zoom
  • the digital zoom range of camera 2 is 6 to 10 times digital zoom.
  • Camera 1 is currently working and the focal length is 3 times.
  • if the user digitally zooms to change the focal length to 7x, the zoomed focal length exceeds the digital zoom range of camera 1, so the terminal switches to camera 2 and the focal length becomes 7x.
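The switching rule in the example above can be sketched as a small selection function. The zoom ranges mirror the example (camera 1: 1x to 5x, camera 2: 6x to 10x); the function name and range table are hypothetical:

```python
# Hypothetical digital-zoom ranges for a two-camera terminal.
ZOOM_RANGES = {1: (1.0, 5.0), 2: (6.0, 10.0)}

def select_camera(current: int, zoom: float) -> int:
    """Keep the current camera if the requested zoom is within its range;
    otherwise switch to the camera whose range is closest to the request."""
    lo, hi = ZOOM_RANGES[current]
    if lo <= zoom <= hi:
        return current

    def gap(cam: int) -> float:
        # Distance from the requested zoom to a camera's zoom range.
        rlo, rhi = ZOOM_RANGES[cam]
        return max(rlo - zoom, 0.0, zoom - rhi)

    return min(ZOOM_RANGES, key=gap)
```

For example, `select_camera(1, 3.0)` stays on camera 1, while `select_camera(1, 7.0)` switches to camera 2, matching the 3x-to-7x scenario above.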
  • the first camera is a camera before switching
  • the second camera is a camera after switching
  • the first camera is a camera before digital zoom
  • the second camera is a camera after digital zoom
  • the first camera is a camera currently performing digital zoom
  • the second camera is a camera with the smallest difference between the digital zoom range and the digital zoom range of the first camera.
  • the second camera is a camera before switching, and the first camera is a camera after switching;
  • the second camera is a camera before digital zoom
  • the first camera is a camera after digital zoom
  • the second camera is a camera currently performing digital zoom
  • the first camera is a camera with the smallest difference between its digital zoom range and the digital zoom range of the second camera.
  • the first image data and the second image data may or may not be captured at the same time.
  • the first image data and the second image data must partially overlap in scene; that is, part of the first image data and part of the second image data are captured from the same scene.
  • Step 101: when the first chromaticity data of the first image data is better than the second chromaticity data of the second image data, and the first luminance data of the second image data is better than the second luminance data of the first image data, perform fusion processing on the first image data and the second image data according to the first chrominance data and the first brightness data to obtain fused image data.
  • the quality of the chrominance data and luminance data of image data is determined by the camera's own hardware (such as the sensor and lens) and by the image algorithm. The hardware determines the quality of the image data in RAW format, and the image algorithm determines the quality of the image in YUV or YCbCr format. The image data in RAW format is the most original image data collected by the camera; Y represents the luminance data of the processed image data obtained after processing the RAW image data with the image algorithm, and UV or CbCr represent the chrominance data of the processed image data obtained after the same processing.
  • the angle of view of the first image data is greater than the angle of view of the second image data; and the first camera is a camera before switching and the second camera is a camera after switching; or, the first camera is a camera before digital zooming and the second camera is a camera after digital zooming; or, the first camera is a camera currently performing digital zooming, and the second camera is a camera with the smallest difference between its digital zoom range and the digital zoom range of the first camera.
  • the fusion processing of the first image data and the second image data according to the first chrominance data and the first brightness data to obtain the fused image data includes: in a case where the field angle of the first image data is larger than the field angle of the second image data, acquiring third chromaticity data from the first chromaticity data, where the acquired third chromaticity data is the chromaticity data in the first chromaticity data whose field angle is the same as the field angle of the first luminance data; scaling the third chromaticity data to obtain fourth chromaticity data with the same size as the first luminance data; and combining the fourth chromaticity data and the first luminance data to obtain fused image data.
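The crop-scale-combine sequence above can be sketched with numpy. This is a minimal sketch under stated assumptions: full-resolution single chroma and luma planes, a nearest-neighbour resize standing in for the unspecified scaling step, the field-angle ratio used as the crop factor, and hypothetical function names throughout:

```python
import numpy as np

def nn_resize(plane: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D plane (stand-in for the
    scaling processing, which the text leaves unspecified)."""
    h, w = plane.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return plane[rows[:, None], cols]

def fuse(first_chroma: np.ndarray, first_luma: np.ndarray,
         fov1: float, fov2: float, center: tuple) -> np.ndarray:
    """Crop the wide camera's chroma to the narrow camera's field of
    view, upscale it to the luminance size, and pair the planes."""
    fk1 = fov2 / fov1                       # crop factor from the field angles
    h1, w1 = first_chroma.shape
    ch, cw = int(h1 * fk1), int(w1 * fk1)   # size of the third chroma data
    cy, cx = center                         # first position (centre of 2nd image in 1st)
    third = first_chroma[cy - ch // 2: cy - ch // 2 + ch,
                         cx - cw // 2: cx - cw // 2 + cw]
    fourth = nn_resize(third, *first_luma.shape)
    return np.stack([first_luma, fourth])   # fused (luma, chroma) planes
```

A real pipeline would also handle chroma subsampling and use a smoother interpolation, but the data flow is the same: chroma from one camera, luminance from the other.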
  • the first image data is shown in Figure 2
  • the second image data is shown in Figure 3
  • the third chromaticity data is shown in Figure 4
  • the fourth chromaticity data is shown in Figure 5
  • the first luminance data is shown in Figure 6.
  • the final fused image data is shown in Figure 7.
  • the fourth chromaticity data obtained after enlarging the third chromaticity data is relatively blurred.
  • the fused image data obtained by combining the fourth chromaticity data and the first luminance data is clearer than the fourth chromaticity data while retaining its color, thus making the camera switching process smoother.
  • the acquiring the third chromaticity data from the first chromaticity data includes: determining the size of the third chromaticity data according to the field angle of the first image data and the field angle of the second image data; determining a first position of the center of the second image data in the first image data; and acquiring the third chromaticity data from the first chromaticity data according to the size of the third chromaticity data and the first position.
  • the determining the size of the third chromaticity data according to the field angle of the first image data and the field angle of the second image data includes: determining the first scaling factor according to the field angle of the first image data and the field angle of the second image data; and determining the size of the third chromaticity data according to the first scaling factor.
  • the determining the first scaling factor according to the field angle of the first image data and the field angle of the second image data includes: calculating the first scaling factor according to the formula Fk1 = FOV2 / FOV1, where Fk1 is the first scaling factor, FOV2 is the field angle of the second image data, and FOV1 is the field angle of the first image data.
  • the determining the size of the third chroma data according to the first scaling factor includes: determining the length of the third chroma data is h1 ⁇ Fk1, and the width is w1 ⁇ Fk1; Fk1 is the first scaling factor , h1 is the length of the first chroma data, w1 is the width of the first chroma data.
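A quick worked example of the sizing rule, taking the first scaling factor as the ratio of the two field angles (consistent with the h1×Fk1 sizing above); the field angles and plane size are hypothetical:

```python
# Hypothetical field angles (degrees) and first-chroma-plane size.
FOV1, FOV2 = 80.0, 40.0
Fk1 = FOV2 / FOV1                         # first scaling factor
h1, w1 = 1080, 1920                       # length and width of first chroma data
third_size = (int(h1 * Fk1), int(w1 * Fk1))  # size of third chroma data
```

With these numbers, Fk1 is 0.5 and the third chroma data is 540 by 960, i.e. the central half of the wide plane in each dimension.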
  • the determining the first position of the center of the second image data in the first image data includes: determining the first position according to a pre-stored offset between the first camera and the second camera .
  • the offset between the first camera and the second camera is the offset between the center of the first image data and the center of the second image data.
  • the determining the first position of the center of the second image data in the first image data includes: searching, within a preset neighborhood of the center of the first image data, for the position with the smallest shooting deviation, and taking it as the first position.
  • the shooting deviation of the overlapping portion of the first image data and the second image data is calculated, and the position of the pixel corresponding to the minimum shooting deviation is the first position. The shooting deviation may be computed, for example, as ΔE = Σ_{i=1}^{h2} Σ_{j=1}^{w2} |D1(i, j) − D2(i, j)|, where ΔE is the shooting deviation; h2 and w2 are the length and width of the overlapping portion of the first image data and the second image data; D1(i, j) is the pixel value at row i, column j of the image data in the first image data that overlaps with the second image data; and D2(i, j) is the pixel value at row i, column j of the image data in the second image data that overlaps with the first image data.
  • to reduce the amount of calculation, image data of only a partial area of the overlapping portion may be extracted for the calculation.
  • when there is distortion in the image data, other algorithms, such as the Oriented FAST and Rotated BRIEF (ORB) algorithm and the Binary Robust Invariant Scalable Keypoints (BRISK) algorithm, can be used to perform the image alignment operation.
  • the acquiring the third chromaticity data from the first chromaticity data according to the size of the third chromaticity data and the first position includes: acquiring, from the first chromaticity data, chromaticity data that is centered on the first position and has a size equal to the size of the third chromaticity data, as the third chromaticity data.
  • the angle of view of the first image data is smaller than the angle of view of the second image data; and the second camera is the camera before switching and the first camera is the camera after switching; or, the second camera is the camera before digital zooming and the first camera is the camera after digital zooming; or, the second camera is the camera currently performing digital zooming, and the first camera is the camera with the smallest difference between its digital zoom range and the digital zoom range of the second camera.
  • the fusion processing of the first image data and the second image data according to the first chrominance data and the first brightness data to obtain the fused image data includes: in a case where the field angle of the first image data is smaller than the field angle of the second image data, acquiring fourth luminance data from the first luminance data, where the acquired fourth luminance data is the luminance data in the first luminance data whose field angle is the same as the field angle of the first chromaticity data; scaling the fourth luminance data to obtain fifth luminance data having the same size as the first chrominance data; and combining the fifth luminance data and the first chrominance data to obtain fused image data.
  • the acquiring the fourth brightness data from the first brightness data includes: determining the size of the fourth brightness data according to the field angle of the first image data and the field angle of the second image data; determining a second position of the center of the first image data in the second image data; and acquiring the fourth brightness data from the first brightness data according to the size of the fourth brightness data and the second position.
  • the determining the size of the fourth brightness data according to the field angle of the first image data and the field angle of the second image data includes: determining the second scaling factor according to the field angle of the first image data and the field angle of the second image data; and determining the size of the fourth brightness data according to the second scaling factor.
  • the determining the second scaling factor according to the field angle of the first image data and the field angle of the second image data includes: calculating the second scaling factor according to the formula Fk2 = FOV1 / FOV2, where Fk2 is the second scaling factor, FOV2 is the field angle of the second image data, and FOV1 is the field angle of the first image data.
  • the determining the size of the fourth brightness data according to the second scaling factor includes: determining the length of the fourth brightness data is h3 ⁇ Fk2, and the width is w3 ⁇ Fk2; Fk2 is the second scaling factor, h3 is the length of the first brightness data, and w3 is the width of the first brightness data.
  • the determining the second position of the center of the first image data in the second image data includes: determining the second position according to a pre-stored offset between the first camera and the second camera .
  • the offset between the first camera and the second camera is the offset between the center of the first image data and the center of the second image data.
  • the determining the second position of the center of the first image data in the second image data includes: searching, within a preset neighborhood of the center of the second image data, for the position with the smallest shooting deviation, and taking it as the second position.
  • the shooting deviation of the overlapping portion of the first image data and the second image data is calculated, and the position of the pixel corresponding to the minimum shooting deviation is the second position. The shooting deviation may be computed, for example, as ΔE = Σ_{i=1}^{h2} Σ_{j=1}^{w2} |D1(i, j) − D2(i, j)|, where ΔE is the shooting deviation; h2 and w2 are the length and width of the overlapping portion of the first image data and the second image data; D1(i, j) is the pixel value at row i, column j of the image data in the first image data that overlaps with the second image data; and D2(i, j) is the pixel value at row i, column j of the image data in the second image data that overlaps with the first image data.
  • to reduce the amount of calculation, image data of only a partial area of the overlapping portion may be extracted for the calculation.
  • other algorithms such as ORB algorithm and BRISK algorithm can be used to realize the operation of image alignment.
  • the acquiring the fourth brightness data from the first brightness data according to the size of the fourth brightness data and the second position includes: acquiring, from the first brightness data, luminance data that is centered on the second position and has a size equal to the size of the fourth brightness data, as the fourth brightness data.
  • the angle of view of the first image data is greater than the angle of view of the second image data; and the second camera is the camera before switching and the first camera is the camera after switching; or, the second camera is the camera before digital zooming and the first camera is the camera after digital zooming; or, the second camera is the camera currently performing digital zooming, and the first camera is the camera with the smallest difference between its digital zoom range and the digital zoom range of the second camera.
  • the fusion processing of the first image data and the second image data according to the first chrominance data and the first brightness data to obtain the fused image data includes: in a case where the field angle of the first image data is larger than the field angle of the second image data, acquiring third image data from the first image data, where the acquired third image data is the image data in the first image data whose field angle is the same as the field angle of the second image data; scaling the second image data to obtain fourth image data with the same size as the third image data; combining the fifth chroma data in the third image data and the third luminance data in the fourth image data to obtain fifth image data; and replacing the third image data in the first image data with the fifth image data to obtain fused image data.
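The final replacement step, pasting the fused region back into the wide image, can be sketched as a simple in-place region copy. The function name and parameters are hypothetical; the fused region is assumed to be centred on the first position:

```python
import numpy as np

def replace_region(first: np.ndarray, fifth: np.ndarray,
                   center: tuple) -> np.ndarray:
    """Replace the region of the first image data that corresponds to
    the third image data with the fused fifth image data."""
    out = first.copy()          # leave the original image data untouched
    h, w = fifth.shape[:2]
    top = center[0] - h // 2    # region is centred on the first position
    left = center[1] - w // 2
    out[top:top + h, left:left + w] = fifth
    return out
```

Because only the overlapping region is replaced, the surrounding wide-angle content is preserved while the centre gains the fused chroma and luminance.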
  • the acquiring the third image data from the first image data includes: determining the size of the third image data according to the field angle of the first image data and the field angle of the second image data; determining a first position of the center of the second image data in the first image data; and acquiring the third image data from the first image data according to the size of the third image data and the first position.
  • the determining the size of the third image data according to the field angle of the first image data and the field angle of the second image data includes: determining the first scaling factor according to the field angle of the first image data and the field angle of the second image data; and determining the size of the third image data according to the first scaling factor.
  • the determining the first scaling factor according to the field angle of the first image data and the field angle of the second image data includes: calculating the first scaling factor according to the formula Fk1 = FOV2 / FOV1, where Fk1 is the first scaling factor, FOV2 is the field angle of the second image data, and FOV1 is the field angle of the first image data.
  • the determining the size of the third image data according to the first scaling factor includes: determining that the length of the third image data is h4 ⁇ Fk1, and the width is w4 ⁇ Fk1; Fk1 is the first scaling factor, h4 is the length of the first image data, w4 is the width of the first image data.
  • the determining the first position of the center of the second image data in the first image data includes: determining the first position according to a pre-stored offset between the first camera and the second camera .
  • the offset between the first camera and the second camera is the offset between the center of the first image data and the center of the second image data.
  • the determining the first position of the center of the second image data in the first image data includes: searching, within a preset neighborhood of the center of the first image data, for the position with the smallest shooting deviation, and taking it as the first position.
  • the shooting deviation of the overlapping portion of the first image data and the second image data is calculated, and the position of the pixel corresponding to the minimum shooting deviation is the first position. The shooting deviation may be computed, for example, as ΔE = Σ_{i=1}^{h2} Σ_{j=1}^{w2} |D1(i, j) − D2(i, j)|, where ΔE is the shooting deviation; h2 and w2 are the length and width of the overlapping portion of the first image data and the second image data; D1(i, j) is the pixel value at row i, column j of the image data in the first image data that overlaps with the second image data; and D2(i, j) is the pixel value at row i, column j of the image data in the second image data that overlaps with the first image data.
  • to reduce the amount of calculation, image data of only a partial area of the overlapping portion may be extracted for the calculation.
  • other algorithms such as ORB algorithm and BRISK algorithm can be used to realize the operation of image alignment.
  • the acquiring the third image data from the first image data according to the size of the third image data and the first position includes: acquiring, from the first image data, image data that is centered on the first position and has a size equal to the size of the third image data, as the third image data.
  • the angle of view of the first image data is smaller than the angle of view of the second image data; and the first camera is a camera before switching and the second camera is a camera after switching; or, the first camera is a camera before digital zooming and the second camera is a camera after digital zooming; or, the first camera is a camera currently performing digital zooming, and the second camera is a camera with the smallest difference between its digital zoom range and the digital zoom range of the first camera.
  • the fusion processing of the first image data and the second image data according to the first chrominance data and the first brightness data to obtain the fused image data includes: in a case where the field angle of the first image data is smaller than the field angle of the second image data, acquiring sixth image data from the second image data, where the acquired sixth image data is the image data in the second image data whose field angle is the same as the field angle of the first image data; scaling the first image data to obtain seventh image data having the same size as the sixth image data; combining the sixth chrominance data in the seventh image data and the sixth luminance data in the sixth image data to obtain eighth image data; and replacing the sixth image data in the second image data with the eighth image data to obtain fused image data.
  • the acquiring the sixth image data from the second image data includes: determining the size of the sixth image data according to the field angle of the first image data and the field angle of the second image data; determining a second position of the center of the first image data in the second image data; and acquiring the sixth image data from the second image data according to the size of the sixth image data and the second position.
  • the determining the size of the sixth image data according to the field angle of the first image data and the field angle of the second image data includes: determining the second scaling factor according to the field angle of the first image data and the field angle of the second image data; and determining the size of the sixth image data according to the second scaling factor.
  • determining the second scaling factor according to the field angle of the first image data and the field angle of the second image data includes: calculating the second scaling factor according to the formula Fk2 = FOV1 / FOV2, where Fk2 is the second scaling factor, FOV2 is the field angle of the second image data, and FOV1 is the field angle of the first image data.
  • the determining the size of the sixth image data according to the second scaling factor includes: determining the length of the sixth image data is h4 ⁇ Fk2, and the width is w4 ⁇ Fk2; Fk2 is the second scaling factor, h4 is the length of the second image data, and w4 is the width of the second image data.
  • the determining the second position of the center of the first image data in the second image data includes: determining the second position according to a pre-stored offset between the first camera and the second camera .
  • the offset between the first camera and the second camera is the offset between the center of the first image data and the center of the second image data.
  • the determining the second position of the center of the first image data in the second image data includes: searching, within a preset neighborhood of the center of the second image data, for the position with the smallest shooting deviation, and taking it as the second position.
  • the shooting deviation of the overlapping portion of the first image data and the second image data is calculated, and the position of the pixel corresponding to the minimum shooting deviation is the second position. The shooting deviation may be computed, for example, as ΔE = Σ_{i=1}^{h2} Σ_{j=1}^{w2} |D1(i, j) − D2(i, j)|, where ΔE is the shooting deviation; h2 and w2 are the length and width of the overlapping portion of the first image data and the second image data; D1(i, j) is the pixel value at row i, column j of the image data in the first image data that overlaps with the second image data; and D2(i, j) is the pixel value at row i, column j of the image data in the second image data that overlaps with the first image data.
  • to reduce the amount of calculation, image data of only a partial area of the overlapping portion may be extracted for the calculation.
  • other algorithms such as ORB algorithm and BRISK algorithm can be used to realize the operation of image alignment.
  • the acquiring the sixth image data from the second image data according to the size of the sixth image data and the second position includes: acquiring, from the second image data, image data that is centered on the second position and has a size equal to the size of the sixth image data, as the sixth image data.
  • the first camera is the camera before switching and the second camera is the camera after switching; or, the first camera is the camera before digital zooming and the second camera is the camera after digital zooming; or, the first camera is the camera currently performing digital zooming, and the second camera is the camera with the smallest difference between its digital zoom range and the digital zoom range of the first camera.
  • the performing fusion processing on the first image data and the second image data according to the first chrominance data and the first brightness data to obtain the fusion image data includes:
  • the sixth chromaticity data is obtained from the first chromaticity data
  • the fourth luminance data is obtained from the first luminance data
  • the field angle of the sixth chrominance data is the same as the field angle of the fourth luminance data
  • the fused image data is obtained by replacing the image data in the second image data that has the same position and size as the fourth luminance data with the seventh image data.
  • the positions of the sixth chrominance data and the fourth luminance data may be preset, or may be selected and determined by a user.
  • when there is distortion in the image data, in order to ensure accuracy, other algorithms such as the ORB algorithm and the BRISK algorithm can be used to perform the image alignment.
  • the second camera is the camera before switching, and the first camera is the camera after switching; or, the second camera is the camera before digital zoom, and the first camera is the camera after digital zoom; or, the second camera is the camera currently performing digital zoom, and the first camera is the camera whose digital zoom range differs least from that of the second camera.
  • the performing fusion processing on the first image data and the second image data according to the first chrominance data and the first brightness data to obtain the fusion image data includes:
  • the sixth chromaticity data is obtained from the first chromaticity data
  • the fourth luminance data is obtained from the first luminance data
  • the field angle of the sixth chrominance data is the same as the field angle of the fourth luminance data
  • the fused image data is obtained by replacing the image data in the first image data that has the same position and size as the sixth chrominance data with the eighth image data.
  • when there is distortion in the image data, in order to ensure accuracy, other algorithms such as the ORB algorithm and the BRISK algorithm can be used to perform the image alignment.
  • Step 102: display the fused image data.
  • when the first chrominance data of the first image data is better than the second chrominance data of the second image data, and the first luminance data of the second image data is better than the second luminance data of the first image data, fusing the two images according to the first chrominance data and the first luminance data improves the clarity of the fused image data while retaining the chrominance data of the first image data, and smooths the switching or zooming process during digital zoom or camera switching.
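The chrominance/luminance fusion summarized above — keep the better chroma, rescale it to the size of the better luma, then combine — can be sketched as follows. This is a minimal illustration assuming planar data stored as nested lists and nearest-neighbor scaling; the function names are hypothetical, not from the patent:

```python
def resize_nearest(plane, out_h, out_w):
    # Nearest-neighbor scaling of a 2-D plane to out_h x out_w.
    in_h, in_w = len(plane), len(plane[0])
    return [[plane[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def fuse_chroma_luma(chroma_a, luma_b):
    # Rescale camera A's chrominance plane to the size of camera B's
    # luminance plane, then pair them per pixel into (Y, C) tuples
    # as a stand-in for a real YUV merge.
    h, w = len(luma_b), len(luma_b[0])
    c = resize_nearest(chroma_a, h, w)
    return [[(luma_b[i][j], c[i][j]) for j in range(w)] for i in range(h)]
```

A production implementation would operate on real YUV or YCbCr planes with proper interpolation; this sketch only shows the data flow of the fusion step.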
  • the embodiment of the present application provides an electronic device, as shown in FIG. 10 , including:
  • At least one processor 1001 (only one is shown in FIG. 10 );
  • the processor 1001 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.; the memory 1002 is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory (FLASH).
  • the processor 1001 and the memory 1002 are connected to each other through a bus, and then connected to other components of the computing device.
  • the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the above image processing method is realized.
  • FIG. 8 is a block diagram of an image processing device provided by an embodiment of the present application.
  • an embodiment of the present application provides an image processing device, which includes an image data acquisition module 801 , an image data processing module 802 and a display module 803 .
  • the image data acquisition module 801 is configured to acquire first image data corresponding to the first image captured by the first camera, and acquire second image data corresponding to the second image captured by the second camera.
  • the image data processing module 802 is configured to, when the first chrominance data of the first image data is better than the second chrominance data of the second image data and the first luminance data of the second image data is better than the second luminance data of the first image data, perform fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain fused image data.
  • a display module 803 configured to display the fused image data.
  • the image data processing module 802 is configured to perform the fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain the fused image data in the following manner: when the field angle of the first image data is larger than the field angle of the second image data, obtaining third chrominance data from the first chrominance data, the third chrominance data being the chrominance data in the first chrominance data whose field angle is the same as that of the first luminance data; scaling the third chrominance data to obtain fourth chrominance data having the same size as the first luminance data; and combining the fourth chrominance data and the first luminance data to obtain the fused image data.
  • the image data processing module 802 is configured to acquire the third chrominance data from the first chrominance data in the following manner: determining the size of the third chrominance data according to the field angle of the first image data and the field angle of the second image data; determining the first position of the center of the second image data in the first image data; and acquiring the third chrominance data from the first chrominance data according to the size of the third chrominance data and the determined first position.
  • the image data processing module 802 is configured to determine the size of the third chrominance data according to the field angle of the first image data and the field angle of the second image data in the following manner: determining a first scaling factor according to the field angle of the first image data and the field angle of the second image data; and determining the size of the third chrominance data according to the first scaling factor.
  • the first scaling factor is determined according to the field angle of the first image data and the field angle of the second image data by calculating the first scaling factor according to the formula, where Fk1 is the first scaling factor, FOV2 is the field angle of the second image data, and FOV1 is the field angle of the first image data.
  • the image data processing module 802 is configured to determine the size of the third chrominance data according to the first scaling factor in the following manner: determining the length of the third chrominance data to be h1 × Fk1 and the width to be w1 × Fk1, where Fk1 is the first scaling factor, h1 is the length of the first chrominance data, and w1 is the width of the first chrominance data.
  • the image data processing module 802 is configured to determine the position of the center of the second image data in the first image data in the following manner: determining the position of the center of the second image data in the first image data according to a pre-stored offset between the first camera and the second camera.
  • the image data processing module 802 is configured to determine the position of the center of the second image data in the first image data in the following manner: searching, in a preset central neighborhood in the first image data, for the position with the smallest shooting deviation as the position of the center of the second image data in the first image data.
  • the image data processing module 802 is configured to perform the fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain the fused image data in the following manner: when the field angle of the first image data is smaller than the field angle of the second image data, obtaining fourth luminance data from the first luminance data, the fourth luminance data being the luminance data in the first luminance data whose field angle is the same as that of the first chrominance data; scaling the fourth luminance data to obtain fifth luminance data having the same size as the first chrominance data; and combining the fifth luminance data and the first chrominance data to obtain the fused image data.
  • the first camera is the camera before switching, and the second camera is the camera after switching; or, the first camera is the camera before digital zoom, and the second camera is the camera after digital zoom; or, the first camera is the camera currently performing digital zoom, and the second camera is the camera with the smallest difference between its digital zoom range and the digital zoom range of the first camera.
  • the image data processing module 802 is configured to perform the fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain the fused image data in the following manner: when the field angle of the first image data is larger than the field angle of the second image data, obtaining third image data from the first image data, the third image data being the image data in the first image data whose field angle is the same as that of the second image data; scaling the second image data to obtain fourth image data having the same size as the third image data; combining the fifth chrominance data in the third image data and the third luminance data in the fourth image data to obtain fifth image data; and replacing the third image data in the first image data with the fifth image data to obtain the fused image data.
  • the image data processing module 802 is configured to perform the fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain the fused image data in the following manner: when the field angle of the first image data is smaller than the field angle of the second image data, obtaining sixth image data from the second image data, the sixth image data being the image data in the second image data whose field angle is the same as that of the first image data; scaling the first image data to obtain seventh image data having the same size as the sixth image data; combining the sixth chrominance data in the seventh image data and the sixth luminance data in the sixth image data to obtain eighth image data; and replacing the sixth image data in the second image data with the eighth image data to obtain the fused image data.
  • the second camera is the camera before switching, and the first camera is the camera after switching; or, the second camera is the camera before digital zoom, and the first camera is the camera after digital zoom; or, the second camera is the camera currently performing digital zoom, and the first camera is the camera with the smallest difference between its digital zoom range and the digital zoom range of the second camera.
  • the image data processing module 802 is configured to perform the fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain the fused image data in the following manner: obtaining sixth chrominance data from the first chrominance data and fourth luminance data from the first luminance data, the field angle of the sixth chrominance data being the same as the field angle of the fourth luminance data; scaling the sixth chrominance data to obtain seventh chrominance data having the same size as the fourth luminance data; combining the seventh chrominance data and the fourth luminance data to obtain seventh image data; and replacing the image data in the second image data that has the same position and size as the fourth luminance data with the seventh image data to obtain the fused image data.
  • the image data processing module 802 is configured to perform the fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain the fused image data in the following manner: obtaining sixth chrominance data from the first chrominance data and fourth luminance data from the first luminance data, the field angle of the sixth chrominance data being the same as the field angle of the fourth luminance data; scaling the fourth luminance data to obtain fifth luminance data having the same size as the sixth chrominance data; combining the fifth luminance data and the sixth chrominance data to obtain eighth image data; and replacing the image data in the first image data that has the same position and size as the sixth chrominance data with the eighth image data to obtain the fused image data.
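Several of the fusion variants above finish by replacing a same-position, same-size region of one image with the fused patch. A minimal sketch of that final replacement step (an illustrative helper, not from the patent):

```python
def replace_region(img, patch, top, left):
    # Return a copy of img with `patch` pasted in at (top, left);
    # the original image is left untouched.
    out = [row[:] for row in img]
    for i, prow in enumerate(patch):
        for j, v in enumerate(prow):
            out[top + i][left + j] = v
    return out
```

In the patent's flow, `patch` would be the fifth, seventh, or eighth image data and `(top, left)` the position determined during registration.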
  • FIG. 9 shows a block diagram of the composition of the terminal provided by the embodiment of the present application.
  • the terminal of the embodiment of the present application includes: a multi-camera module 901, a multi-channel ISP unit 902, a main control unit 903, and a data processing unit 904 , a display unit 905 , and a storage unit 906 .
  • the above-mentioned image data acquisition module 801 can be set in the multi-camera module 901
  • the image data processing module 802 can be set in the data processing unit 904 .
  • the multi-camera module 901 is configured to collect image data.
  • the multi-channel ISP unit 902 is configured to control the working status of the cameras in the multi-camera module 901 , and realize the processing of AWB, AF, AE, image color, etc. on the images collected by the cameras in the multi-camera module 901 .
  • the data processing unit 904 is configured to implement processing of image data, for example, to implement functions such as image fusion, image registration, and image FOV conversion.
  • the data processing unit 904 may be a program code stored in the storage unit 906, and when executed by the main control unit 903, functions such as image fusion, registration, and FOV conversion are realized.
  • the main control unit 903 is configured to control the work of other modules or units.
  • the main control unit 903 may be a CPU or an image processing chip, and implements functions such as image fusion, registration, and FOV conversion by executing programs of the data processing unit 904 .
  • the display unit 905 is configured to realize image preview and display of the control interface.
  • the display unit 905 may be a display screen of a terminal or the like.
  • the storage unit 906 is configured to realize storage of image data.
  • the storage unit 906 may be a built-in memory or an external storage device.
  • the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation.
  • some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cartridges, magnetic tape, magnetic disk storage or other magnetic storage, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.


Abstract

The present application provides an image processing method, an electronic device, and a computer-readable storage medium. The image processing method includes: acquiring first image data corresponding to a first image captured by a first camera, and acquiring second image data corresponding to a second image captured by a second camera; when first chrominance data of the first image data is better than second chrominance data of the second image data and first luminance data of the second image data is better than second luminance data of the first image data, performing fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain fused image data; and displaying the fused image data.

Description

Image Processing Method, Electronic Device, and Computer-Readable Storage Medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202110899252.2 filed on August 5, 2021, the content of which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of the present application relate to the technical field of terminals, and in particular to an image processing method, an electronic device, and a computer-readable storage medium.
Background
Multiple cameras are a core selling point of a camera product, and multi-camera collaborative photography technology, which can greatly improve shooting satisfaction, is developing rapidly. Multi-camera collaborative shooting suffers from the problem that switching or zooming is not smooth enough when digital zoom or camera switching is performed.
Disclosure
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring first image data corresponding to a first image captured by a first camera, and acquiring second image data corresponding to a second image captured by a second camera; when first chrominance data of the first image data is better than second chrominance data of the second image data and first luminance data of the second image data is better than second luminance data of the first image data, performing fusion processing on the first image data and the second image data according to the first chrominance data and the first luminance data to obtain fused image data; and displaying the fused image data.
In a second aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory storing at least one computer program which, when executed by the at least one processor, implements the above image processing method.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above image processing method.
Brief Description of the Drawings
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of first image data provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of second image data provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of third chrominance data provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of fourth chrominance data provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of first luminance data provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of fused image data provided by an embodiment of the present application;
FIG. 8 is a block diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 9 is a block diagram of a terminal provided by an embodiment of the present application; and
FIG. 10 is a block diagram of an electronic device provided by an embodiment of the present application.
具体实施方式
为使本领域的技术人员更好地理解本申请的技术方案,下面结合附图对本申请提供的图像处理方法、电子设备、以及计算机可读存储介质进行详细描述。
在下文中将参考附图更充分地描述示例实施例,但是所述示例实施例可以以不同形式来体现,且本公开不应当被解释为限于本文阐述的实施例。提供这些实施例的目的在于使本申请更加透彻和完整,并使本领域技术人员充分理解本申请的范围。
在不冲突的情况下,本申请各实施例及实施例中的各特征可相互组合。
如本文所使用的,术语“和/或”包括至少一个相关列举条目的任何和所有组合。
本文所使用的术语仅用于描述特定实施例,且不限制本申请。 如本文所使用的,单数形式“一个”和“该”也包括复数形式,除非上下文另外清楚指出。还将理解的是,当本说明书中使用术语“包括”和/或“由……制成”时,指定存在特定特征、整体、步骤、操作、元件和/或组件,但不排除存在或可添加至少一个其它特征、整体、步骤、操作、元件、组件和/或其群组。
除非另外限定,否则本文所用的所有术语(包括技术术语和科学术语)的含义与本领域普通技术人员通常理解的含义相同。还将理解,诸如在常用字典中限定的那些术语应当被解释为具有与其在相关技术以及本申请的背景下的含义一致的含义,且将不解释为具有理想化或过度形式上的含义,除非本文明确如此限定。
When digital zoom or camera switching is performed during multi-camera collaborative shooting, the image signal processing (ISP) side mainly uses built-in 3A algorithms to calculate the auto white balance (AWB) and automatic exposure (AE) of each camera, and conversion algorithms and calibration coefficients between the different cameras are designed according to the statistical results of color temperature and luminance. This technique relies heavily on tuning; for camera modules with large consistency variations, inconsistent effects between cameras after switching are likely to occur, such as color deviation, abnormal luminance, and poor sharpness, resulting in camera switching that is not smooth enough.
图1为本申请实施例提供的图像处理方法的流程图。
第一方面,参照图1,本申请实施例提供一种图像处理方法,适用于包括两个或两个以上摄像头的终端,终端中包括的摄像头可以包括但不限于如下组合:超广角摄像头+广角摄像头(主摄像头)、广角摄像头(主摄像头)+长焦摄像头、广角摄像头(主摄像头)+潜望摄像头、超广角摄像头+广角摄像头(主摄像头)+长焦摄像头、超广角摄像头+广角摄像头(主摄像头)+潜望摄像头、以及超广角摄像头+广角摄像头(主摄像头)+长焦摄像头+潜望摄像头。
该图像处理方法包括步骤100至102。
步骤100、获取第一摄像头拍摄的第一图像对应的第一图像数据,获取第二摄像头拍摄的第二图像对应的第二图像数据。
在一些示例性实施方式中,可以在进行数码变焦或摄像头切换 过程中,同时采用第一摄像头拍摄第一图像得到第一图像数据,采用第二摄像头拍摄第二图像得到第二图像数据。
在一些示例性实施方式中,也可以在进行数码变焦或摄像头切换后,采用第一摄像头拍摄第一图像得到第一图像数据,采用第二摄像头拍摄第二图像得到第二图像数据。
在一些示例性实施方式中,在进行数码变焦或摄像头切换过程中,或在进行数码变焦或摄像头切换后,在满足预设条件的情况下,采用第一摄像头拍摄第一图像得到第一图像数据,采用第二摄像头拍摄第二图像得到第二图像数据。
在一些示例性实施方式中,预设条件包括以下至少之一:
第一图像数据的第一色度数据优于第二图像数据的第二色度数据,第二图像数据的第一亮度数据优于第一图像数据的第二亮度数据;
第一图像数据的第二亮度数据优于第二图像数据的第一亮度数据,第二图像数据的第二色度数据优于第一图像数据的第一色度数据;
在将第一摄像头切换为第二摄像头的情况下,第二摄像头的色彩偏差大于第一摄像头的色彩偏差;色彩偏差是指同一场景上,存在肉眼可见的明显差异;
在将第一摄像头切换为第二摄像头的情况下,将第二图像数据的色彩状态调整到与第一图像数据的色彩状态相同所需要的时间大于或等于预设时间;
在将第一摄像头切换为第二摄像头的情况下,第二摄像头的亮度变动大于第一摄像头的亮度变动;以及
在将第一摄像头切换为第二摄像头的情况下,将第二图像数据的亮度状态调整到与第一图像数据的亮度状态相同所需要的时间大于或等于预设时间。
在一些示例性实施方式中,第一图像数据的第一色度数据优于第二图像数据的第二色度数据包括以下至少之一:
第一图像数据的第一色度数据的准确度大于第二图像数据的第二色度数据的准确度;以及
第一图像数据的第一色度数据的信噪比大于第二图像数据的第 二色度数据的信噪比。
在一些示例性实施方式中,第二图像数据的第一亮度数据优于第一图像数据的第二亮度数据包括以下至少之一:
第二图像数据的第一亮度数据的准确度大于第一图像数据的第二亮度数据的准确度;以及
第二图像数据的第一亮度数据的信噪比大于第一图像数据的第二亮度数据的信噪比。
It should be noted here that the digital zoom range of each camera is limited. When the digital zoom focal length exceeds the camera's digital zoom range, a camera switch is required; in general, the switch is to the camera whose digital zoom range is adjacent to that of the current camera, that is, the camera with the smallest difference from the current camera's digital zoom range. For example, camera one has a digital zoom range of 1x to 5x and camera two has a digital zoom range of 6x to 10x. If camera one is currently working at a focal length of 3x and the user zooms to 7x, the zoomed focal length exceeds camera one's digital zoom range, so the terminal switches to camera two and the focal length becomes 7x.
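The switching rule in the example above — stay on a camera while the requested zoom lies in its range, otherwise pick the camera whose range is nearest — can be sketched as follows (a minimal Python illustration; the camera names and ranges are hypothetical):

```python
def pick_camera(cameras, zoom):
    # cameras: list of (name, lo, hi) digital-zoom ranges.
    for name, lo, hi in cameras:
        if lo <= zoom <= hi:
            return name  # requested zoom is inside this camera's range
    # Otherwise choose the camera whose range is closest to the request.
    def gap(cam):
        _, lo, hi = cam
        return lo - zoom if zoom < lo else zoom - hi
    return min(cameras, key=gap)[0]
```

With cameras [("one", 1, 5), ("two", 6, 10)], zooming from 3x to 7x leaves camera one's range, so camera two is selected, matching the example.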
在一些示例性实施方式中,第一摄像头为切换前的摄像头,第二摄像头为切换后的摄像头;
或者,第一摄像头为数码变焦前的摄像头,第二摄像头为数码变焦后的摄像头;
或者,第一摄像头为当前进行数码变焦的摄像头,第二摄像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,第二摄像头为切换前的摄像头,所述第一摄像头为切换后的摄像头;
或者,第二摄像头为数码变焦前的摄像头,第一摄像头为数码变焦后的摄像头;
或者,第二摄像头为当前进行数码变焦的摄像头,第一摄像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,第一图像数据和第二图像数据可以同时拍摄,也可以不同时拍摄。
在一些示例性实施方式中,第一图像数据和第二图像数据中要有部分场景重合,也就是说,第一图像数据和第二图像数据中有部分图像数据是拍摄同一个场景的。
步骤101、在第一图像数据的第一色度数据优于第二图像数据的第二色度数据、且第二图像数据的第一亮度数据优于第一图像数据的第二亮度数据的情况下,根据第一色度数据和第一亮度数据对第一图像数据和第二图像数据进行融合处理得到融合图像数据。
在本申请实施例提供的图像处理方法中,图像数据的色度数据和亮度数据的好坏是由摄像头本身的硬件及图像算法综合决定的,传感器(sensor)的优劣和镜头的优劣等决定了RAW格式的图像数据的优劣,图像算法决定了YUV或YCbCr格式图像的优劣;RAW格式的图像数据为摄像头采集的最原始的图像数据,Y表示采用图像算法对RAW格式的图像数据进行处理后得到的处理后的图像数据的亮度数据,UV、CbCr表示采用图像算法对RAW格式的图像数据进行处理后得到的处理后的图像数据的色度数据。
下面分为六种情况分别说明如何获得融合图像数据。
情况一、第一图像数据的视场角大于所述第二图像数据的视场角;并且,第一摄像头为切换前的摄像头,第二摄像头为切换后的摄像头;或者,第一摄像头为数码变焦前的摄像头,第二摄像头为数码变焦后的摄像头;或者,第一摄像头为当前进行数码变焦的摄像头,第二摄像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,所述根据第一色度数据和第一亮度数据对第一图像数据和第二图像数据进行融合处理得到融合图像数据包括:在第一图像数据的视场角大于第二图像数据的视场角的情况下,从第一色度数据中获取第三色度数据;获取的第三色度数据为第一色度数据中视场角与第一亮度数据的视场角相同的色度数据;将第三色度数据进行缩放处理得到大小与第一亮度数据的大小相同的第四色度数据;以及将第四色度数据和第一亮度数据组合得到融合图像数据。
例如,第一图像数据如图2所示,第二图像数据如图3所示,第三色度数据如图4所示,第四色度数据如图5所示,第一亮度数据如图6所示,最终得到的融合图像数据如图7所示。
如图5所示,将第三色度数据放大后得到的第四色度数据比较模糊。如图7所示,将第四色度数据和第一亮度数据组合后得到的融合图像数据比第四色度数据更加清晰,同时保留了第四色度数据的色彩,从而使得摄像头切换过程更加平滑。
在一些示例性实施方式中,所述从第一色度数据中获取第三色度数据包括:根据第一图像数据的视场角和第二图像数据的视场角确定第三色度数据的大小;确定第二图像数据的中心在第一图像数据中的第一位置;以及根据第三色度数据的大小和第一位置从第一色度数据中获取第三色度数据。
在一些示例性实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第三色度数据的大小包括:根据第一图像数据的视场角和第二图像数据的视场角确定第一缩放系数;以及根据第一缩放系数确定第三色度数据的大小。
在一些示例性实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第一缩放系数包括:按照公式
Figure PCTCN2022106966-appb-000001
计算第一缩放系数;Fk1为第一缩放系数,FOV 2为第二图像数据的视场角,FOV 1为第一图像数据的视场角。
在一些示例性实施方式中,所述根据第一缩放系数确定第三色度数据的大小包括:确定第三色度数据的长为h1×Fk1,宽为w1×Fk1;Fk1为第一缩放系数,h1为第一色度数据的长,w1为第一色度数据的宽。
在一些示例性实施方式中,所述确定第二图像数据的中心在第一图像数据中的第一位置包括:根据预先存储的第一摄像头和第二摄像头之间的偏移量确定第一位置。
在一些示例性实施方式中,第一摄像头和第二摄像头之间的偏 移量为第一图像数据的中心和第二图像数据的中心之间的偏移量。
在一些示例性实施方式中,所述确定第二图像数据的中心在第一图像数据中的第一位置包括:在第一图像数据中预先设置的中心邻域中搜索拍摄偏差最小的位置作为第一位置。
在一些示例性实施方式中,在中心邻域中的某一个像素与第二图像数据的中心像素重叠的情况下,按照公式
Figure PCTCN2022106966-appb-000002
计算第一图像数据和第二图像数据的重叠部分的拍摄偏差,最小拍摄偏差对应的该像素的位置即为第一位置。
△E为拍摄偏差,h2为第一图像数据和第二图像数据的重叠部分的长,w2为第一图像数据和第二图像数据的重叠部分的宽,D 1(i,j)为第一图像数据中与第二图像数据重叠部分的图像数据的第i行第j列的像素值,D 2(i,j)为第二图像数据中与第一图像数据重叠部分的图像数据的第i行第j列的像素值。
在第一图像数据和第二图像数据的重叠部分的数据量大于预设阈值的情况下,可以截取重叠部分的部分区域的图像数据进行计算。当图像数据存在畸变时,为保证精度可以用定向快速旋转简化(ORB,Oriented Fast and Rotated Brief)算法、二进制鲁棒独立基本特征(BRISK,Binary Robust Independent Elementary Features)算法等其他算法实现图像对准的操作。
在一些示例性实施方式中,所述根据第三色度数据的大小和第一位置从第一色度数据中获取第三色度数据包括:从第一色度数据中以第一位置为中心获取大小为第三色度数据的大小的色度数据作为第三色度数据。
情况二、第一图像数据的视场角小于第二图像数据的视场角;并且,第二摄像头为切换前的摄像头,第一摄像头为切换后的摄像头;或者,第二摄像头为数码变焦前的摄像头,第一摄像头为数码变焦后的摄像头;或者,第二摄像头为当前进行数码变焦的摄像头,第一摄 像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,所述根据第一色度数据和第一亮度数据对第一图像数据和第二图像数据进行融合处理得到融合图像数据包括:在第一图像数据的视场角小于第二图像数据的视场角的情况下,从第一亮度数据中获取第四亮度数据;获取的第四亮度数据为第一亮度数据中视场角与第一色度数据的视场角相同的亮度数据;将第四亮度数据进行缩放处理得到大小与第一色度数据的大小相同的第五亮度数据;以及将第五亮度数据和第一色度数据组合得到融合图像数据。
在一些示例性实施方式中,所述从第一亮度数据中获取第四亮度数据包括:根据第一图像数据的视场角和第二图像数据的视场角确定第四亮度数据的大小;确定第一图像数据的中心在第二图像数据中的第二位置;以及根据第四亮度数据的大小和第二位置从第一亮度数据中获取第四亮度数据。
在一些示例性实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第四亮度数据的大小包括:根据第一图像数据的视场角和第二图像数据的视场角确定第二缩放系数;以及根据第二缩放系数确定第四亮度数据的大小。
在一些示例性实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第二缩放系数包括:按照公式
Figure PCTCN2022106966-appb-000003
计算第二缩放系数;Fk2为第二缩放系数,FOV 2为第二图像数据的视场角,FOV 1为第一图像数据的视场角。
在一些示例性实施方式中,所述根据第二缩放系数确定第四亮度数据的大小包括:确定第四亮度数据的长为h3×Fk2,宽为w3×Fk2;Fk2为第二缩放系数,h3为第一亮度数据的长,w3为第一亮度数据的宽。
在一些示例性实施方式中,所述确定第一图像数据的中心在第 二图像数据中的第二位置包括:根据预先存储的第一摄像头和第二摄像头之间的偏移量确定第二位置。
在一些示例性实施方式中,第一摄像头和第二摄像头之间的偏移量为第一图像数据的中心和第二图像数据的中心之间的偏移量。
在一些示例性实施方式中,所述确定第一图像数据的中心在第二图像数据中的第二位置包括:在第二图像数据中预先设置的中心邻域中搜索拍摄偏差最小的位置作为第二位置。
在一些示例性实施方式中,在中心邻域中的某一个像素与第一图像数据的中心像素重叠的情况下,按照公式
Figure PCTCN2022106966-appb-000004
计算第一图像数据和第二图像数据的重叠部分的拍摄偏差,最小拍摄偏差对应的该像素的位置即为第二位置。
△E为拍摄偏差,h2为第一图像数据和第二图像数据的重叠部分的长,w2为第一图像数据和第二图像数据的重叠部分的宽,D 1(i,j)为第一图像数据中与第二图像数据重叠部分的图像数据的第i行第j列的像素值,D 2(i,j)为第二图像数据中与第一图像数据重叠部分的图像数据的第i行第j列的像素值。
在第一图像数据和第二图像数据的重叠部分的数据量大于预设阈值的情况下,可以截取重叠部分的部分区域的图像数据进行计算。当图像数据存在畸变时,为保证精度可以用ORB算法,BRISK算法等其他算法实现图像对准的操作。
在一些示例性实施方式中,所述根据第四亮度数据的大小和第二位置从第一亮度数据中获取第四亮度数据包括:从第一亮度数据中以第二位置为中心获取大小为第四亮度数据的大小的亮度数据作为第四亮度数据。
情况三、第一图像数据的视场角大于所述第二图像数据的视场角;并且,第二摄像头为切换前的摄像头,第一摄像头为切换后的摄像头;或者,第二摄像头为数码变焦前的摄像头,第一摄像头为数码 变焦后的摄像头;或者,第二摄像头为当前进行数码变焦的摄像头,第一摄像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,所述根据第一色度数据和第一亮度数据对第一图像数据和第二图像数据进行融合处理得到融合图像数据包括:在第一图像数据的视场角大于第二图像数据的视场角的情况下,从第一图像数据中获取第三图像数据;获取的第三图像数据为第一图像数据中视场角与第二图像数据的视场角相同的图像数据;将第二图像数据进行缩放处理得到大小与第三图像数据的大小相同的第四图像数据;将第三图像数据中的第五色度数据和第四图像数据中的第三亮度数据组合得到第五图像数据;以及将第一图像数据中的第三图像数据替换为第五图像数据得到融合图像数据。
在一些示例性实施方式中,所述从第一图像数据中获取第三图像数据包括:根据第一图像数据的视场角和第二图像数据的视场角确定第三图像数据的大小;确定第二图像数据的中心在第一图像数据中的第一位置;以及根据第三图像数据的大小和第一位置从第一图像数据中获取第三图像数据。
在一些实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第三图像数据的大小包括:根据第一图像数据的视场角和第二图像数据的视场角确定第一缩放系数;以及根据第一缩放系数确定第三图像数据的大小。
在一些示例性实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第一缩放系数包括:按照公式
Figure PCTCN2022106966-appb-000005
计算第一缩放系数;Fk1为第一缩放系数,FOV 2为第二图像数据的视场角,FOV 1为第一图像数据的视场角。
在一些示例性实施方式中,所述根据第一缩放系数确定第三图像数据的大小包括:确定第三图像数据的长为h4×Fk1,宽为w4×Fk1;Fk1为第一缩放系数,h4为第一图像数据的长,w4为第一图像数据 的宽。
在一些示例性实施方式中,所述确定第二图像数据的中心在第一图像数据中的第一位置包括:根据预先存储的第一摄像头和第二摄像头之间的偏移量确定第一位置。
在一些示例性实施方式中,第一摄像头和第二摄像头之间的偏移量为第一图像数据的中心和第二图像数据的中心之间的偏移量。
在一些示例性实施方式中,所述确定第二图像数据的中心在第一图像数据中的第一位置包括:在第一图像数据中预先设置的中心邻域中搜索拍摄偏差最小的位置作为第一位置。
在一些示例性实施方式中,在中心邻域中的某一个像素与第二图像数据的中心像素重叠的情况下,按照公式
Figure PCTCN2022106966-appb-000006
计算第一图像数据和第二图像数据的重叠部分的拍摄偏差,最小拍摄偏差对应的该像素的位置即为第一位置。
△E为拍摄偏差,h2为第一图像数据和第二图像数据的重叠部分的长,w2为第一图像数据和第二图像数据的重叠部分的宽,D 1(i,j)为第一图像数据中与第二图像数据重叠部分的图像数据的第i行第j列的像素值,D 2(i,j)为第二图像数据中与第一图像数据重叠部分的图像数据的第i行第j列的像素值。
在第一图像数据和第二图像数据的重叠部分的数据量大于预设阈值的情况下,可以截取重叠部分的部分区域的图像数据进行计算。当图像数据存在畸变时,为保证精度可以用ORB算法,BRISK算法等其他算法实现图像对准的操作。
在一些示例性实施方式中,所述根据第三图像数据的大小和第一位置从第一图像数据中获取第三图像数据包括:从第一图像数据中以第一位置为中心获取大小为第三图像数据的大小的图像数据作为第三图像数据。
情况四、第一图像数据的视场角小于所述第二图像数据的视场 角;并且,第一摄像头为切换前的摄像头,第二摄像头为切换后的摄像头;或者,第一摄像头为数码变焦前的摄像头,第二摄像头为数码变焦后的摄像头;或者,第一摄像头为当前进行数码变焦的摄像头,第二摄像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,所述根据第一色度数据和第一亮度数据对第一图像数据和第二图像数据进行融合处理得到融合图像数据包括:在第一图像数据的视场角小于第二图像数据的视场角的情况下,从第二图像数据中获取第六图像数据;获取的第六图像数据为第二图像数据中视场角与第一图像数据的视场角相同的图像数据;将第一图像数据进行缩放处理得到大小与第六图像数据的大小相同的第七图像数据;将第七图像数据中的第六色度数据和第六图像数据中的第六亮度数据组合得到第八图像数据;以及将第二图像数据中的第六图像数据替换为第八图像数据得到融合图像数据。
在一些示例性实施方式中,所述从第二图像数据中获取第六图像数据包括:根据第一图像数据的视场角和第二图像数据的视场角确定第六图像数据的大小;确定第一图像数据的中心在第二图像数据中的第二位置;以及根据第六图像数据的大小和第二位置从第二图像数据中获取第六图像数据。
在一些示例性实施方式中,所述根据第一图像数据的视场角和第二图像数据的视场角确定第六图像数据的大小包括:根据第一图像数据的视场角和第二图像数据的视场角确定第二缩放系数;以及根据第二缩放系数确定第六图像数据的大小。
在一些示例性实施方式中,根据第一图像数据的视场角和第二图像数据的视场角确定第二缩放系数包括:按照公式
Figure PCTCN2022106966-appb-000007
计算第二缩放系数;Fk2为第二缩放系数,FOV 2为第二图像数据的视场角,FOV 1为第一图像数据的视场角。
在一些示例性实施方式中,所述根据第二缩放系数确定第六图 像数据的大小包括:确定第六图像数据的长为h4×Fk2,宽为w4×Fk2;Fk2为第二缩放系数,h4为第二图像数据的长,w4为第二图像数据的宽。
在一些示例性实施方式中,所述确定第一图像数据的中心在第二图像数据中的第二位置包括:根据预先存储的第一摄像头和第二摄像头之间的偏移量确定第二位置。
在一些示例性实施方式中,第一摄像头和第二摄像头之间的偏移量为第一图像数据的中心和第二图像数据的中心之间的偏移量。
在一些示例性实施方式中,所述确定第一图像数据的中心在第二图像数据中的第二位置包括:在第二图像数据中预先设置的中心邻域中搜索拍摄偏差最小的位置作为第二位置。
在一些示例性实施方式中,在中心邻域中的某一个像素与第一图像数据的中心像素重叠的情况下,按照公式
Figure PCTCN2022106966-appb-000008
计算第一图像数据和第二图像数据的重叠部分的拍摄偏差,最小拍摄偏差对应的该像素的位置即为第二位置。
△E为拍摄偏差,h2为第一图像数据和第二图像数据的重叠部分的长,w2为第一图像数据和第二图像数据的重叠部分的宽,D 1(i,j)为第一图像数据中与第二图像数据重叠部分的图像数据的第i行第j列的像素值,D 2(i,j)为第二图像数据中与第一图像数据重叠部分的图像数据的第i行第j列的像素值。
在第一图像数据和第二图像数据的重叠部分的数据量大于预设阈值的情况下,可以截取重叠部分的部分区域的图像数据进行计算。当图像数据存在畸变时,为保证精度可以用ORB算法,BRISK算法等其他算法实现图像对准的操作。
在一些示例性实施方式中,所述根据第六图像数据的大小和第二位置从第二图像数据中获取第六图像数据包括:从第二图像数据中以第二位置为中心获取大小为第六图像数据的大小的图像数据作为 第六图像数据。
情况五、第一摄像头为切换前的摄像头,第二摄像头为切换后的摄像头;或者,第一摄像头为数码变焦前的摄像头,第二摄像头为数码变焦后的摄像头;或者,第一摄像头为当前进行数码变焦的摄像头,第二摄像头为数码变焦范围与第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,所述根据所述第一色度数据和所述第一亮度数据对所述第一图像数据和所述第二图像数据进行融合处理得到融合图像数据包括:
从所述第一色度数据中获取第六色度数据,从所述第一亮度数据中获取第四亮度数据;所述第六色度数据的视场角和所述第四亮度数据的视场角相同;
将所述第六色度数据进行缩放处理得到大小与所述第四亮度数据的大小相同的第七色度数据;
将所述第七色度数据和所述第四亮度数据组合得到第七图像数据;以及
将所述第二图像数据中与所述第四亮度数据所在位置和大小相同的图像数据替换为所述第七图像数据得到所述融合图像数据。
在一些示例性实施方式中,第六色度数据和第四亮度数据所在的位置可以预先设定,也可以由用户选择确定。
当图像数据存在畸变时,为保证精度可以用ORB算法,BRISK算法等其他算法实现图像对准的操作。
情况六、第二摄像头为切换前的摄像头,第一摄像头为切换后的摄像头;或者,第二摄像头为数码变焦前的摄像头,第一摄像头为数码变焦后的摄像头;或者,第二摄像头为当前进行数码变焦的摄像头,第一摄像头为数码变焦范围与第二摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,所述根据所述第一色度数据和所述第一亮度数据对所述第一图像数据和所述第二图像数据进行融合处理得到融合图像数据包括:
从所述第一色度数据中获取第六色度数据,从所述第一亮度数据中获取第四亮度数据;所述第六色度数据的视场角和所述第四亮度数据的视场角相同;
将所述第四亮度数据进行缩放处理得到大小与所述第六色度数据的大小相同的第五亮度数据;
将所述第五亮度数据和所述第六色度数据组合得到第八图像数据;以及
将所述第一图像数据中与所述第六色度数据所在位置和大小相同的图像数据替换为所述第八图像数据得到所述融合图像数据。
当图像数据存在畸变时,为保证精度可以用ORB算法,BRISK算法等其他算法实现对图像对准的操作。
步骤102、显示融合图像数据。
本申请实施例提供的图像处理方法,在所述第一图像数据的第一色度数据优于所述第二图像数据的第二色度数据,且所述第二图像数据的第一亮度数据优于所述第一图像数据的第二亮度数据的情况下,根据所述第一色度数据和所述第一亮度数据对所述第一图像数据和所述第二图像数据进行融合处理得到融合图像数据,在保留第一图像数据的色度数据的同时提高了融合图像数据的清晰度,提高了在进行数码变焦或摄像头切换时切换或变焦过程的平滑度。
第二方面,本申请实施例提供一种电子设备,如图10所示,包括:
至少一个处理器1001(图10中仅示出一个);以及
存储器1002,存储器1002上存储有至少一个计算机程序,当所述至少一个计算机程序被所述至少一个处理器1001执行时,实现上述图像处理方法。
处理器1001为具有数据处理能力的器件,包括但不限于中央处理器(CPU)等;存储器1002为具有数据存储能力的器件,包括但不限于随机存取存储器(RAM,更具体如SDRAM、DDR等)、只读存储器(ROM)、带电可擦可编程只读存储器(EEPROM)、闪存(FLASH)。
在一些实施方式中,处理器1001、存储器1002通过总线相互连 接,进而与计算设备的其它组件连接。
第三方面,本申请实施例提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,计算机程序被处理器执行时实现上述图像处理方法。
图8为本申请实施例提供的图像处理装置的组成框图。
第四方面,参照图8,本申请实施例提供一种图像处理装置,该图像处理装置包括图像数据获取模块801、图像数据处理模块802和显示模块803。
图像数据获取模块801配置为获取第一摄像头拍摄的第一图像对应的第一图像数据,获取第二摄像头拍摄的第二图像对应的第二图像数据。
图像数据处理模块802配置为在所述第一图像数据的第一色度数据优于所述第二图像数据的第二色度数据、且所述第二图像数据的第一亮度数据优于所述第一图像数据的第二亮度数据的情况下,根据所述第一色度数据和所述第一亮度数据对所述第一图像数据和所述第二图像数据进行融合处理得到融合图像数据。
显示模块803,配置为显示所述融合图像数据。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现所述根据所述第一色度数据和所述第一亮度数据对所述第一图像数据和所述第二图像数据进行融合处理得到融合图像数据:在所述第一图像数据的视场角大于所述第二图像数据的视场角的情况下,从所述第一色度数据中获取第三色度数据;所述第三色度数据为所述第一色度数据中视场角与所述第一亮度数据的视场角相同的色度数据;将所述第三色度数据进行缩放处理得到大小与所述第一亮度数据的大小相同的第四色度数据;以及将所述第四色度数据和所述第一亮度数据组合得到所述融合图像数据。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现从所述第一色度数据中获取第三色度数据:根据所述第一图像数据的视场角和所述第二图像数据的视场角确定所述第三色度数据的大小;确定所述第二图像数据的中心在所述第一图像数据中 的第一位置;以及根据所述第三色度数据的大小和确定的第一位置从所述第一色度数据中获取所述第三色度数据。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现根据所述第一图像数据的视场角和所述第二图像数据的视场角确定所述第三色度数据的大小:根据所述第一图像数据的视场角和所述第二图像数据的视场角确定第一缩放系数;以及根据所述第一缩放系数确定所述第三色度数据的大小。
在一些示例性实施方式中,所述根据所述第一图像数据的视场角和所述第二图像数据的视场角确定第一缩放系数:按照公式
Figure PCTCN2022106966-appb-000009
计算所述第一缩放系数;Fk1为所述第一缩放系数,FOV 2为所述第二图像数据的视场角,FOV 1为所述第一图像数据的视场角。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现根据所述第一缩放系数确定所述第三色度数据的大小:确定所述第三色度数据的长为h1×Fk1,宽为w1×Fk1;Fk1为所述第一缩放系数,h1为所述第一色度数据的长,w1为所述第一色度数据的宽。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现确定所述第二图像数据的中心在所述第一图像数据中的位置:根据预先存储的所述第一摄像头和所述第二摄像头之间的偏移量确定所述第二图像数据的中心在所述第一图像数据中的位置。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现确定所述第二图像数据的中心在所述第一图像数据中的位置:在第一图像数据中预先设置的中心邻域中搜索拍摄偏差最小的位置作为第二图像数据的中心在第一图像数据中的位置。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现根据所述第一色度数据和所述第一亮度数据对所述第 一图像数据和所述第二图像数据进行融合处理得到融合图像数据:在所述第一图像数据的视场角小于所述第二图像数据的视场角的情况下,从所述第一亮度数据中获取第四亮度数据;所述第四亮度数据为所述第一亮度数据中视场角与所述第一色度数据的视场角相同的亮度数据;将所述第四亮度数据进行缩放处理得到大小与所述第一色度数据的大小相同的第五亮度数据;以及将所述第五亮度数据和所述第一色度数据组合得到所述融合图像数据。
在一些示例性实施方式中,所述第一摄像头为切换前的摄像头,所述第二摄像头为切换后的摄像头;或者,所述第一摄像头为数码变焦前的摄像头,所述第二摄像头为数码变焦后的摄像头;或者,所述第一摄像头为当前进行数码变焦的摄像头,所述第二摄像头为数码变焦范围与所述第一摄像头的数码变焦范围之差最小的摄像头。
在一些示例性实施方式中,图像数据处理模块802配置为采用以下方式实现根据所述第一色度数据和所述第一亮度数据对所述第一图像数据和所述第二图像数据进行融合处理得到融合图像数据:在所述第一图像数据的视场角大于所述第二图像数据的视场角的情况下,从所述第一图像数据中获取第三图像数据;所述第三图像数据为所述第一图像数据中视场角与所述第二图像数据的视场角相同的图像数据;将所述第二图像数据进行缩放处理得到大小与所述第三图像数据的大小相同的第四图像数据;将所述第三图像数据中的第五色度数据和所述第四图像数据中的第三亮度数据组合得到第五图像数据;以及将所述第一图像数据中的所述第三图像数据替换为所述第五图像数据得到所述融合图像数据。
In some exemplary implementations, the image data processing module 802 is configured to implement the fusing of the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data in the following manner: in a case where the field of view of the first image data is smaller than the field of view of the second image data, acquiring sixth image data from the second image data, the sixth image data being the image data, in the second image data, whose field of view is the same as the field of view of the first image data; scaling the first image data to obtain seventh image data having the same size as the sixth image data; combining sixth chroma data in the seventh image data and sixth luminance data in the sixth image data to obtain eighth image data; and replacing the sixth image data in the second image data with the eighth image data to obtain the fused image data.
In some exemplary implementations, the second camera is the camera before switching and the first camera is the camera after switching; or, the second camera is the camera before digital zooming and the first camera is the camera after digital zooming; or, the second camera is the camera currently performing digital zooming and the first camera is the camera whose digital zoom range differs least from the digital zoom range of the second camera.
In some exemplary implementations, the image data processing module 802 is configured to implement the fusing of the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data in the following manner: acquiring sixth chroma data from the first chroma data and fourth luminance data from the first luminance data, the field of view of the sixth chroma data being the same as the field of view of the fourth luminance data; scaling the sixth chroma data to obtain seventh chroma data having the same size as the fourth luminance data; combining the seventh chroma data and the fourth luminance data to obtain seventh image data; and replacing the image data in the second image data that has the same position and size as the fourth luminance data with the seventh image data to obtain the fused image data.
In some exemplary implementations, the image data processing module 802 is configured to implement the fusing of the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data in the following manner: acquiring sixth chroma data from the first chroma data and fourth luminance data from the first luminance data, the field of view of the sixth chroma data being the same as the field of view of the fourth luminance data; scaling the fourth luminance data to obtain fifth luminance data having the same size as the sixth chroma data; combining the fifth luminance data and the sixth chroma data to obtain eighth image data; and replacing the image data in the first image data that has the same position and size as the sixth chroma data with the eighth image data to obtain the fused image data.
The specific implementation process of the image processing apparatus described above is the same as that of the image processing method described above, and is not repeated here.
Fig. 9 is a block diagram of the terminal provided by an embodiment of the present application. As shown in Fig. 9, the terminal of the embodiment of the present application includes: a multi-camera module 901, a multi-channel ISP unit 902, a main control unit 903, a data processing unit 904, a display unit 905, and a storage unit 906. The image data acquisition module 801 described above may be provided in the multi-camera module 901, and the image data processing module 802 may be provided in the data processing unit 904.
The multi-camera module 901 is configured to capture image data.
The multi-channel ISP unit 902 is configured to control the working states of the cameras in the multi-camera module 901, and to perform AWB, AF, AE, image color, and other processing on the images captured by the cameras in the multi-camera module 901.
The data processing unit 904 is configured to process image data, for example to implement functions such as image fusion, image registration, and image FOV conversion. The data processing unit 904 may be program code stored in the storage unit 906 which, when executed by the main control unit 903, implements the image fusion, registration, FOV conversion, and other functions.
The main control unit 903 is configured to control the operation of the other modules or units. The main control unit 903 may be a CPU or an image processing chip, and implements the image fusion, registration, FOV conversion, and other functions by executing the program of the data processing unit 904.
The display unit 905 is configured to display the image preview and the control interface. The display unit 905 may be, for example, the display screen of the terminal.
The storage unit 906 is configured to store image data. The storage unit 906 may be a built-in memory, an external storage device, or the like.
Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the apparatus, may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor (such as a central processing unit, a digital signal processor, or a microprocessor), as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to one skilled in the art, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments, unless otherwise specifically indicated. Accordingly, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the scope of the present application as set forth in the appended claims.

Claims (13)

  1. An image processing method, comprising:
    acquiring first image data corresponding to a first image captured by a first camera, and acquiring second image data corresponding to a second image captured by a second camera;
    in a case where first chroma data of the first image data is better than second chroma data of the second image data and first luminance data of the second image data is better than second luminance data of the first image data, fusing the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data; and
    displaying the fused image data.
  2. The image processing method according to claim 1, wherein the fusing the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data comprises:
    in a case where the field of view of the first image data is greater than the field of view of the second image data, acquiring third chroma data from the first chroma data, wherein the third chroma data is the chroma data, in the first chroma data, whose field of view is the same as the field of view of the first luminance data;
    scaling the third chroma data to obtain fourth chroma data having the same size as the first luminance data; and
    combining the fourth chroma data and the first luminance data to obtain the fused image data.
  3. The image processing method according to claim 2, wherein the acquiring third chroma data from the first chroma data comprises:
    determining the size of the third chroma data according to the field of view of the first image data and the field of view of the second image data;
    determining a first position of the center of the second image data within the first image data; and
    acquiring the third chroma data from the first chroma data according to the size of the third chroma data and the first position.
  4. The image processing method according to claim 3, wherein the determining the size of the third chroma data according to the field of view of the first image data and the field of view of the second image data comprises:
    determining a first scaling coefficient according to the field of view of the first image data and the field of view of the second image data; and
    determining the size of the third chroma data according to the first scaling coefficient.
  5. The image processing method according to claim 4, wherein the determining the size of the third chroma data according to the first scaling coefficient comprises:
    determining the length of the third chroma data to be h1×Fk1 and the width to be w1×Fk1;
    wherein Fk1 is the first scaling coefficient, h1 is the length of the first chroma data, and w1 is the width of the first chroma data.
  6. The image processing method according to claim 3, wherein the determining a first position of the center of the second image data within the first image data comprises:
    determining the first position according to a pre-stored offset between the first camera and the second camera;
    or, searching a preset neighborhood of the center of the first image data for the position with the smallest shooting deviation, and taking that position as the first position.
  7. The image processing method according to claim 1, wherein the fusing the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data comprises:
    in a case where the field of view of the first image data is smaller than the field of view of the second image data, acquiring sixth image data from the second image data, wherein the sixth image data is the image data, in the second image data, whose field of view is the same as the field of view of the first image data;
    scaling the first image data to obtain seventh image data having the same size as the sixth image data;
    combining sixth chroma data in the seventh image data and sixth luminance data in the sixth image data to obtain eighth image data; and
    replacing the sixth image data in the second image data with the eighth image data to obtain the fused image data.
  8. The image processing method according to any one of claims 2 to 7, wherein the first camera is the camera before switching and the second camera is the camera after switching;
    or, the first camera is the camera before digital zooming and the second camera is the camera after digital zooming;
    or, the first camera is the camera currently performing digital zooming and the second camera is the camera whose digital zoom range differs least from the digital zoom range of the first camera.
  9. The image processing method according to claim 1, wherein the fusing the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data comprises:
    in a case where the field of view of the first image data is smaller than the field of view of the second image data, acquiring fourth luminance data from the first luminance data, wherein the fourth luminance data is the luminance data, in the first luminance data, whose field of view is the same as the field of view of the first chroma data;
    scaling the fourth luminance data to obtain fifth luminance data having the same size as the first chroma data; and
    combining the fifth luminance data and the first chroma data to obtain the fused image data;
    or,
    in a case where the field of view of the first image data is greater than the field of view of the second image data, acquiring third image data from the first image data, wherein the third image data is the image data, in the first image data, whose field of view is the same as the field of view of the second image data;
    scaling the second image data to obtain fourth image data having the same size as the third image data;
    combining fifth chroma data in the third image data and third luminance data in the fourth image data to obtain fifth image data; and
    replacing the third image data in the first image data with the fifth image data to obtain the fused image data.
  10. The image processing method according to claim 9, wherein the second camera is the camera before switching and the first camera is the camera after switching;
    or, the second camera is the camera before digital zooming and the first camera is the camera after digital zooming;
    or, the second camera is the camera currently performing digital zooming and the first camera is the camera whose digital zoom range differs least from the digital zoom range of the second camera.
  11. The image processing method according to claim 1, wherein the fusing the first image data and the second image data according to the first chroma data and the first luminance data to obtain fused image data comprises:
    acquiring sixth chroma data from the first chroma data and fourth luminance data from the first luminance data, wherein the field of view of the sixth chroma data is the same as the field of view of the fourth luminance data;
    scaling the sixth chroma data to obtain seventh chroma data having the same size as the fourth luminance data;
    combining the seventh chroma data and the fourth luminance data to obtain seventh image data;
    replacing the image data in the second image data that has the same position and size as the fourth luminance data with the seventh image data to obtain the fused image data;
    or,
    acquiring sixth chroma data from the first chroma data and fourth luminance data from the first luminance data, wherein the field of view of the sixth chroma data is the same as the field of view of the fourth luminance data;
    scaling the fourth luminance data to obtain fifth luminance data having the same size as the sixth chroma data;
    combining the fifth luminance data and the sixth chroma data to obtain eighth image data;
    replacing the image data in the first image data that has the same position and size as the sixth chroma data with the eighth image data to obtain the fused image data.
  12. An electronic device, comprising:
    at least one processor; and
    a memory storing at least one computer program which, when executed by the at least one processor, implements the image processing method according to any one of claims 1 to 11.
  13. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 11.
PCT/CN2022/106966 2021-08-05 2022-07-21 Image processing method, electronic device, and computer-readable storage medium WO2023011197A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22851911.2A EP4358019A1 (en) 2021-08-05 2022-07-21 Image processing method, electronic device, and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110899252.2 2021-08-05
CN202110899252.2A CN115908210A (zh) 2021-08-05 Image processing method, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2023011197A1 true WO2023011197A1 (zh) 2023-02-09

Family

ID=85155143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106966 WO2023011197A1 (zh) 2022-07-21 Image processing method, electronic device, and computer-readable storage medium

Country Status (3)

Country Link
EP (1) EP4358019A1 (zh)
CN (1) CN115908210A (zh)
WO (1) WO2023011197A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894449A (zh) * 2015-11-11 2016-08-24 乐卡汽车智能科技(北京)有限公司 Method and system for overcoming abrupt color change in image fusion
WO2017152402A1 (zh) * 2016-03-09 2017-09-14 Huawei Technologies Co., Ltd. Image processing method and apparatus for a terminal, and terminal
US20170289555A1 (en) * 2016-03-30 2017-10-05 Dolby Laboratories Licensing Corporation Chroma Reshaping
CN111223058A (zh) * 2019-12-27 2020-06-02 杭州雄迈集成电路技术股份有限公司 Image enhancement method
CN111784603A (zh) * 2020-06-29 2020-10-16 Allwinner Technology Co., Ltd. RAW-domain image denoising method, computer apparatus, and computer-readable storage medium


Also Published As

Publication number Publication date
CN115908210A (zh) 2023-04-04
EP4358019A1 (en) 2024-04-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 22851911. Country of ref document: EP. Kind code of ref document: A1.
WWE Wipo information: entry into national phase. Ref document number: 2022851911. Country of ref document: EP.
ENP Entry into the national phase. Ref document number: 2022851911. Country of ref document: EP. Effective date: 20240116.
NENP Non-entry into the national phase. Ref country code: DE.