WO2022267506A1 - Image fusion method, electronic device, storage medium, and computer program product - Google Patents


Info

Publication number
WO2022267506A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
exposure
dynamic range
images
high dynamic
Prior art date
Application number
PCT/CN2022/077713
Other languages
French (fr)
Chinese (zh)
Inventor
乔晓磊
丁大钧
肖斌
陈珂
朱聪超
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2022267506A1 publication Critical patent/WO2022267506A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • the present application belongs to the technical field of image processing, and in particular relates to an image fusion method, electronic equipment, a computer-readable storage medium and a computer program product.
  • the embodiments of the present application provide an image fusion method, an electronic device, a computer-readable storage medium, and a computer program product, which can improve the clarity of a fused image.
  • the embodiment of the present application provides an image fusion method, including:
  • the first high dynamic range image and the second high dynamic range image are fused to obtain a fused image.
  • the above process can restore some image details lost due to overexposure of the small field of view image by performing high dynamic range fusion processing on the small field of view image, thereby improving the clarity of the fused image.
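The three-step pipeline described above (bracketed capture per camera, per-camera high dynamic range fusion, then cross-camera fusion) can be sketched as follows. The per-pixel weighted average and the simple paste-in of the overlapping region are illustrative stand-ins under stated assumptions, not the patent's actual fusion algorithms:

```python
import numpy as np

def fuse_exposures(frames, weights):
    """Fuse differently exposed frames into one HDR image as a
    per-pixel weighted average (weights are normalized to sum to 1)."""
    frames = np.stack(frames).astype(np.float64)    # (n, H, W)
    weights = np.stack(weights).astype(np.float64)  # (n, H, W)
    weights = weights / weights.sum(axis=0, keepdims=True)
    return (frames * weights).sum(axis=0)

def fuse_fov(wide_hdr, tele_hdr, top, left):
    """Stand-in for fusing the two HDR images: replace the overlapping
    region of the large-FOV image with the small-FOV HDR image."""
    out = wide_hdr.copy()
    h, w = tele_hdr.shape
    out[top:top + h, left:left + w] = tele_hdr
    return out
```

A real implementation would blend the overlap region rather than replace it outright; the replacement above only marks where the small-field-of-view detail lands in the large-field-of-view frame.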
  • fusing the at least two frames of second images with different exposures into a second high dynamic range image may include:
  • the image features of the first high dynamic range image are used to guide the fusion of the second images, so that the obtained second high dynamic range image has the same or similar brightness and other image features as the first high dynamic range image in the overlapping area; therefore the image obtained after fusing the two high dynamic range images also has image features that are the same as or similar to those of the first high dynamic range image.
  • This process may mainly include calculating, according to the image features of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image, and then fusing the second images into the second high dynamic range image according to the calculated weights.
  • calculating, according to the image features of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image may include:
  • calculating, according to the image features of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weights with which the image features of each pixel in the first high dynamic range image come from each first image;
  • that is, the fusion weight of each first image can be calculated, and the first high dynamic range image is then obtained accordingly;
  • calculating, according to those weights, the weights with which the image features of each pixel in the second high dynamic range image come from each second image.
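One concrete way per-pixel fusion weights can be derived from image features is a "well-exposedness" measure, which favors pixels near mid-gray and penalizes clipped highlights and shadows. This Gaussian weighting is a common exposure-fusion heuristic, used here purely as a stand-in for the fusion algorithm, which this excerpt does not specify:

```python
import numpy as np

def well_exposedness_weights(frames, sigma=0.2):
    """Per-pixel weights for an exposure stack with values in [0, 1]:
    a pixel value near mid-gray (0.5) gets a high weight, a clipped
    value gets a low weight; weights are normalized so that they sum
    to 1 at every pixel position."""
    frames = np.stack(frames).astype(np.float64)   # (n, H, W)
    w = np.exp(-((frames - 0.5) ** 2) / (2.0 * sigma ** 2))
    return w / w.sum(axis=0, keepdims=True)
```

With such weight maps, each output pixel of the first high dynamic range image is a weighted combination of the corresponding pixels of the first images, which is exactly the quantity the weight-reuse and feature-multiplexing variants below start from.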
  • a weight reuse method or an image feature multiplexing method may be used.
  • calculating, according to the weights with which the image features of each pixel in the first high dynamic range image come from each first image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image may include:
  • for any target pixel in the second high dynamic range image, determining the weights with which the image features of the corresponding pixel of the target pixel in the first high dynamic range image come from each first image as the weights with which the image features of the target pixel come from each second image.
  • By weight reuse, a second high dynamic range image having image features similar to those of the first high dynamic range image can be obtained.
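The weight-reuse variant can be sketched as follows: the per-pixel weights computed when fusing the first (large-field-of-view) images are applied unchanged, at the corresponding pixels, to the second (small-field-of-view) images. The sketch assumes the weight maps have already been cropped and registered to the small-field-of-view frames:

```python
import numpy as np

def fuse_with_reused_weights(second_frames, first_weights):
    """Weight reuse: fuse the second (small-FOV) exposure stack with
    the per-pixel weights computed for the first (large-FOV) stack, so
    the result inherits the brightness characteristics of the first
    HDR image. first_weights is assumed already normalized (each pixel
    position sums to 1 across frames)."""
    frames = np.stack(second_frames).astype(np.float64)
    weights = np.stack(first_weights).astype(np.float64)
    return (frames * weights).sum(axis=0)
```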
  • calculating, according to the image features of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image may include:
  • for any target pixel in the second high dynamic range image, calculating, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image, wherein the image features of the target pixel are equal to the image features of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image and a second exposure image, and the exposure of the first exposure image is greater than the exposure of the second exposure image; calculating, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image may include:
  • calculating, according to the following formula, the weights with which the image features of the target pixel come from the first exposure image and the second exposure image respectively:
  • X represents the weight with which the image feature of the target pixel comes from the first exposure image;
  • Y represents the weight with which the image feature of the target pixel comes from the second exposure image;
  • A represents the image feature of the corresponding pixel of the target pixel in the first exposure image;
  • B represents the image feature of the corresponding pixel of the target pixel in the second exposure image;
  • P represents the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • A, B, and P are all known values, so the weight X and weight Y can be calculated.
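The formula itself is not reproduced in this excerpt. A natural reading consistent with "A, B, and P are known, so X and Y can be calculated" is a convex combination P = X·A + Y·B with X + Y = 1, which the following sketch solves under exactly that assumption:

```python
def solve_two_exposure_weights(A, B, P):
    """Solve for weights X (first exposure) and Y (second exposure)
    assuming P = X*A + Y*B with X + Y = 1. The linear model is an
    assumption of this sketch, not the patent's published formula."""
    if A == B:
        # Degenerate case: any split reproduces P; pick an even one.
        return 0.5, 0.5
    X = (P - B) / (A - B)
    return X, 1.0 - X
```

Substituting X + Y = 1 into P = X·A + Y·B gives X = (P − B) / (A − B), so two equations suffice for the two unknowns, matching the statement that X and Y are directly calculable.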
  • the at least two frames of second images with different exposures include a first exposure image, a second exposure image and a third exposure image, the exposure of the first exposure image is greater than that of the second exposure image, and the exposure of the second exposure image is greater than that of the third exposure image; calculating, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image may include:
  • any one of the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image is a set value;
  • calculating, according to the following formula, the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image respectively:
  • X represents the weight with which the image feature of the target pixel comes from the first exposure image;
  • Y represents the weight with which the image feature of the target pixel comes from the second exposure image;
  • Z represents the weight with which the image feature of the target pixel comes from the third exposure image;
  • A represents the image feature of the corresponding pixel of the target pixel in the first exposure image;
  • B represents the image feature of the corresponding pixel of the target pixel in the second exposure image;
  • C represents the image feature of the corresponding pixel of the target pixel in the third exposure image;
  • P represents the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • A, B, C and P are all known values, but since there are three unknowns X, Y and Z, an additional restrictive condition needs to be introduced to calculate the values of X, Y and Z.
  • According to the weights with which the image features of the corresponding pixel of the target pixel in the first high dynamic range image come from each first image, any one of the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image is taken as a set value.
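Extending the same assumed linear model to three exposures (P = X·A + Y·B + Z·C with X + Y + Z = 1) leaves one degree of freedom, which matches the "set value" constraint described above: fixing one weight (here Z) reduces the problem to a solvable pair of equations. Both the linear model and the choice of fixing Z are assumptions of this sketch:

```python
def solve_three_exposure_weights(A, B, C, P, Z=0.0):
    """Solve X (first exposure) and Y (second exposure) given a set
    value for Z (third exposure), assuming P = X*A + Y*B + Z*C and
    X + Y + Z = 1. Both equations are assumptions of this sketch."""
    if A == B:
        # Degenerate case: split the remaining weight evenly.
        X = Y = (1.0 - Z) / 2.0
        return X, Y, Z
    # From X + Y = 1 - Z and P - Z*C = X*A + Y*B:
    X = (P - Z * C - (1.0 - Z) * B) / (A - B)
    return X, 1.0 - Z - X, Z
```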
  • acquiring at least two frames of first images with different exposures captured by the first camera may include:
  • Acquiring at least two frames of second images with different exposures captured by the second camera may include:
  • When a camera captures images, a corresponding preview stream is obtained, that is, the data stream corresponding to the photo preview interface displayed on the display interface of the electronic device after the user turns on the camera.
  • The exposure corresponding to the preview stream is generally a default value set by the camera. The exposure corresponding to the preview stream can be used as a baseline: increasing the exposure by a certain ratio above this baseline yields more than one frame of images with higher exposure, and reducing the exposure by a certain ratio below this baseline yields more than one frame of images with lower exposure.
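The bracketing around the preview-stream baseline can be sketched as follows. The adjustment ratio and frame counts are illustrative placeholders, since the text only says the exposure is raised and lowered "by a certain percentage" relative to the baseline:

```python
def bracket_exposures(preview_exposure, ratio=0.5, n_over=1, n_under=1):
    """Derive over- and under-exposed capture settings from the
    preview-stream exposure used as the baseline. ratio=0.5 means
    each step changes the exposure by 50% (an illustrative default,
    not a value from the patent)."""
    over = [preview_exposure * (1.0 + ratio) ** k
            for k in range(1, n_over + 1)]
    under = [preview_exposure / (1.0 + ratio) ** k
             for k in range(1, n_under + 1)]
    return over, under
```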
  • an image fusion device including:
  • an image acquisition module, configured to acquire at least two frames of first images with different exposures captured by a first camera, and acquire at least two frames of second images with different exposures captured by a second camera, wherein the field of view angle of the first camera is greater than the field of view angle of the second camera, and the first image includes the second image;
  • a high dynamic range processing module, configured to fuse the at least two frames of first images with different exposures into a first high dynamic range image, and fuse the at least two frames of second images with different exposures into a second high dynamic range image;
  • An image fusion module configured to fuse the first high dynamic range image and the second high dynamic range image to obtain a fused image.
  • the high dynamic range processing module may include:
  • a fusion weight calculation unit, configured to calculate, according to the image features of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image;
  • a high dynamic range fusion unit, configured to fuse the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image features of each pixel in the second high dynamic range image come from each second image.
  • the fusion weight calculation unit may include:
  • a first fusion weight calculation subunit, configured to calculate, according to the image features of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weights with which the image features of each pixel in the first high dynamic range image come from each first image;
  • a second fusion weight calculation subunit, configured to calculate, according to the weights with which the image features of each pixel in the first high dynamic range image come from each first image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image.
  • the second fusion weight calculation subunit may be specifically configured to: for any target pixel in the second high dynamic range image, determine the weights with which the image features of the corresponding pixel of the target pixel in the first high dynamic range image come from each first image as the weights with which the image features of the target pixel come from each second image.
  • the fusion weight calculation unit may include:
  • a third fusion weight calculation subunit, configured to: for any target pixel in the second high dynamic range image, calculate, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image, wherein the image features of the target pixel are equal to the image features of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image and a second exposure image, the exposure of the first exposure image is greater than the exposure of the second exposure image;
  • the third fusion weight calculation subunit may be specifically configured to: calculate, according to the following formula, the weights with which the image features of the target pixel come from the first exposure image and the second exposure image respectively:
  • X represents the weight with which the image feature of the target pixel comes from the first exposure image;
  • Y represents the weight with which the image feature of the target pixel comes from the second exposure image;
  • A represents the image feature of the corresponding pixel of the target pixel in the first exposure image;
  • B represents the image feature of the corresponding pixel of the target pixel in the second exposure image;
  • P represents the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image, a second exposure image and a third exposure image, the exposure of the first exposure image is greater than that of the second exposure image, and the exposure of the second exposure image is greater than that of the third exposure image;
  • the third fusion weight calculation subunit may specifically include:
  • a weight setting subunit, configured to set, according to the weights with which the image features of the corresponding pixel of the target pixel in the first high dynamic range image come from each first image, any one of the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image to a set value;
  • a formula calculation subunit, configured to calculate, according to the following formula, the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image respectively:
  • X represents the weight with which the image feature of the target pixel comes from the first exposure image;
  • Y represents the weight with which the image feature of the target pixel comes from the second exposure image;
  • Z represents the weight with which the image feature of the target pixel comes from the third exposure image;
  • A represents the image feature of the corresponding pixel of the target pixel in the first exposure image;
  • B represents the image feature of the corresponding pixel of the target pixel in the second exposure image;
  • C represents the image feature of the corresponding pixel of the target pixel in the third exposure image;
  • P represents the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the image acquisition module may include:
  • a first image capturing unit, configured to capture, through the first camera, more than one frame of images with an exposure greater than a first reference exposure, where the first reference exposure is the exposure corresponding to the preview stream of the first camera;
  • a second image capturing unit, configured to capture, through the first camera, more than one frame of images with an exposure less than the first reference exposure;
  • a first image determining unit, configured to determine the more than one frame of images with an exposure greater than the first reference exposure and the more than one frame of images with an exposure less than the first reference exposure as the at least two frames of first images with different exposures;
  • a third image capturing unit, configured to capture, through the second camera, more than one frame of images with an exposure greater than a second reference exposure, where the second reference exposure is the exposure corresponding to the preview stream of the second camera;
  • a fourth image capturing unit, configured to capture, through the second camera, more than one frame of images with an exposure less than the second reference exposure;
  • a second image determining unit, configured to determine the more than one frame of images with an exposure greater than the second reference exposure and the more than one frame of images with an exposure less than the second reference exposure as the at least two frames of second images with different exposures.
  • an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the electronic device implements the following image fusion method:
  • the first high dynamic range image and the second high dynamic range image are fused to obtain a fused image.
  • the electronic device fuses the at least two frames of second images with different exposures into a second high dynamic range image, which may include:
  • the electronic device calculating, according to the image features of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image may include:
  • calculating, according to the image features of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weights with which the image features of each pixel in the first high dynamic range image come from each first image;
  • the electronic device calculating, according to the weights with which the image features of each pixel in the first high dynamic range image come from each first image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image may include:
  • for any target pixel in the second high dynamic range image, determining the weights with which the image features of the corresponding pixel of the target pixel in the first high dynamic range image come from each first image as the weights with which the image features of the target pixel come from each second image.
  • the electronic device calculating, according to the image features of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image come from each second image may include:
  • for any target pixel in the second high dynamic range image, calculating, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image, wherein the image features of the target pixel are equal to the image features of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image and a second exposure image, and the exposure of the first exposure image is greater than the exposure of the second exposure image;
  • the electronic device calculating, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image may include:
  • calculating, according to the following formula, the weights with which the image features of the target pixel come from the first exposure image and the second exposure image respectively:
  • X represents the weight with which the image feature of the target pixel comes from the first exposure image;
  • Y represents the weight with which the image feature of the target pixel comes from the second exposure image;
  • A represents the image feature of the corresponding pixel of the target pixel in the first exposure image;
  • B represents the image feature of the corresponding pixel of the target pixel in the second exposure image;
  • P represents the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image, a second exposure image and a third exposure image, the exposure of the first exposure image is greater than that of the second exposure image, and the exposure of the second exposure image is greater than that of the third exposure image;
  • calculating, according to the image features of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each second image, the weights with which the image features of the target pixel come from each second image may include:
  • any one of the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image is a set value;
  • calculating, according to the following formula, the weights with which the image features of the target pixel come from the first exposure image, the second exposure image and the third exposure image respectively:
  • X represents the weight with which the image feature of the target pixel comes from the first exposure image;
  • Y represents the weight with which the image feature of the target pixel comes from the second exposure image;
  • Z represents the weight with which the image feature of the target pixel comes from the third exposure image;
  • A represents the image feature of the corresponding pixel of the target pixel in the first exposure image;
  • B represents the image feature of the corresponding pixel of the target pixel in the second exposure image;
  • C represents the image feature of the corresponding pixel of the target pixel in the third exposure image;
  • P represents the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the electronic device acquiring at least two frames of first images with different exposures captured by the first camera may include:
  • the electronic device acquires at least two frames of second images with different exposures captured by the second camera, which may include:
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed, the image fusion method proposed in the first aspect of the embodiments of the present application is implemented.
  • an embodiment of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to execute the image fusion method proposed in the first aspect of the embodiments of the present application.
  • FIG. 1 is a hardware structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a flow chart of an image fusion method provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram of the shooting range of two cameras with different field of view angles adopted in the embodiment of the present application;
  • Fig. 4 is a schematic diagram of a large field of view image and a corresponding small field of view image provided by the embodiment of the present application;
  • FIG. 5 is a schematic diagram of an operation principle of an image fusion method provided in an embodiment of the present application.
  • Fig. 6 is a schematic diagram of the effect of the long exposure frame of the main path, the short exposure frame of the main path and the high dynamic range image of the main path in Fig. 5;
  • Fig. 7 is a schematic diagram of the effect of the long exposure frame of the auxiliary path, the short exposure frame of the auxiliary path and the high dynamic range image of the auxiliary path in Fig. 5;
  • Fig. 8 is a schematic diagram of the effect of fusing the high dynamic range image of the main path in Fig. 6 and the high dynamic range image of the auxiliary path in Fig. 7;
  • FIG. 9 is a structural diagram of an image fusion device provided in an embodiment of the present application.
  • Fig. 10 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • An electronic device (such as a mobile phone) can usually be provided with multiple cameras with different viewing angles, such as a normal camera, a telephoto camera, and a wide-angle camera.
  • multi-camera joint photography can be used to improve the quality of photos, for example, by using the small-field-of-view image to improve the clarity of the corresponding area of the large-field-of-view image.
  • In some cases, high dynamic range (High-Dynamic Range, HDR) fusion processing is also performed on the large-field-of-view image to obtain a corresponding high dynamic range image, and the high dynamic range image is then fused with the small-field-of-view image.
  • this application proposes an image fusion method, which performs high dynamic range fusion processing on the small field of view image, and then fuses with the high dynamic range image of the large field of view image to obtain the fused image.
  • the image fusion method proposed in this application can be applied to various electronic devices having at least two cameras with different field of view angles, such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), smart home devices, etc.
  • The embodiments of the present application do not limit the specific type of the electronic device in any way.
  • FIG. 1 shows a block diagram of a partial structure of the mobile phone provided by the embodiment of the present application.
  • the mobile phone comprises: a radio frequency (Radio Frequency, RF) circuit 101, a memory 102, an input unit 103, a display unit 104, a sensor 105, an audio circuit 106, a wireless fidelity (WiFi) module 107, a processor 108, a power supply 109, a common camera 110, a telephoto camera 111 and other components.
  • The RF circuit 101 can be used for receiving and sending signals during information transmission and reception or during a call.
  • In particular, after receiving downlink information from a base station, the RF circuit 101 delivers it to the processor 108 for processing; in addition, it sends designed uplink data to the base station.
  • an RF circuit includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like.
  • the RF circuit 101 can also communicate with networks and other devices through wireless communication.
  • the above wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
  • the memory 102 can be used to store software programs and modules, and the processor 108 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 102 .
  • The memory 102 can mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system and at least one application program required by a function (such as a sound playback function, an image playback function, etc.); the data storage area can store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like.
  • In addition, the memory 102 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 103 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the mobile phone.
  • the input unit 103 may include a touch panel 1031 and other input devices 1032 .
  • The touch panel 1031, also referred to as a touch screen, can collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel 1031 using a finger, a stylus or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1031 may include two parts, a touch detection device and a touch controller.
  • The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 108, and can receive and execute commands sent by the processor 108.
  • the touch panel 1031 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 103 may also include other input devices 1032 .
  • other input devices 1032 may include, but are not limited to, one or more of physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 104 can be used to display information input by or provided to the user and various menus of the mobile phone.
  • the display unit 104 may include a display panel 1041.
  • the display panel 1041 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
  • the touch panel 1031 may cover the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, it transmits the operation to the processor 108 to determine the type of the touch event, and the processor 108 then provides a corresponding visual output on the display panel 1041 according to the type of the touch event.
  • although the touch panel 1031 and the display panel 1041 are described here as two independent components that realize the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 can be integrated to realize the input and output functions of the mobile phone.
  • the handset may also include at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify mobile phone posture (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration recognition related functions (such as a pedometer or tap detection); as for other sensors that may also be configured on the mobile phone, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, details are not repeated here.
  • the audio circuit 106, the speaker 1061, and the microphone 1062 can provide an audio interface between the user and the mobile phone.
  • on one hand, the audio circuit 106 can transmit the electrical signal converted from received audio data to the loudspeaker 1061, and the loudspeaker 1061 converts it into a sound signal for output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 106 receives and converts into audio data; after being processed by the processor 108, the audio data is sent to another mobile phone through the RF circuit 101, or output to the memory 102 for further processing.
  • WiFi is a short-distance wireless transmission technology.
  • the mobile phone can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 107, which provides users with wireless broadband Internet access.
  • although FIG. 1 shows the WiFi module 107, it can be understood that it is not an essential component of the mobile phone and can be omitted as required without changing the essence of the application.
  • the processor 108 is the control center of the mobile phone; it connects various parts of the entire mobile phone through various interfaces and lines, and performs various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 102 and calling data stored in the memory 102, so as to monitor the mobile phone as a whole.
  • the processor 108 may include one or more processing units; preferably, the processor 108 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 108.
  • the mobile phone also includes a power supply 109 (such as a battery) for supplying power to each component.
  • the power supply can be logically connected to the processor 108 through a power management device, so that functions such as charging, discharging, and power consumption management can be realized through the power management device.
  • the mobile phone also includes at least two cameras with different viewing angles; for example, one of them is a normal camera 110, and the other is a telephoto camera 111.
  • the phone may also include other types of cameras such as infrared cameras, hyperspectral cameras, and wide-angle cameras.
  • the position of the camera on the mobile phone may be front or rear, which is not limited in this embodiment of the present application.
  • the mobile phone may also include a bluetooth module, etc., which will not be repeated here.
  • Figure 2 shows a flow chart of an image fusion method provided by an embodiment of the present application, including:
  • the electronic device acquires at least two frames of first images with different exposures captured by the first camera, and acquires at least two frames of second images with different exposures captured by the second camera;
  • the electronic device has at least two cameras with different viewing angles, namely a first camera and a second camera, wherein the viewing angle of the first camera is larger than that of the second camera.
  • the embodiment of the present application does not limit the specific types of the first camera and the second camera.
  • for example, if the first camera is an ordinary camera, the second camera can be a telephoto camera; if the first camera is a wide-angle camera, the second camera can be a normal camera or a telephoto camera, and so on.
  • the first camera and the second camera should be set on the same surface of the electronic device and keep the same or similar shooting angles when capturing images, so that the shooting range of the first camera covers the shooting range of the second camera, that is, the first image captured by the first camera includes the second image captured by the second camera.
  • the first image may be recorded as an image with a large viewing angle
  • the second image may be recorded as an image with a small viewing angle.
  • the camera can control the exposure of the captured image by adjusting parameters such as aperture size and exposure time.
  • by adjusting the parameters of the first camera, at least two frames of first images with different exposures can be obtained, and by adjusting the parameters of the second camera, at least two frames of second images with different exposures can be obtained.
  • acquiring at least two frames of first images with different exposures captured by the first camera may include:
  • the exposure corresponding to the preview stream of the first camera may be acquired as a reference value, that is, the first reference exposure.
  • the camera captures an image, it will obtain the corresponding preview stream, that is, the data stream corresponding to the photo preview interface of the electronic device after the user turns on the camera.
  • the exposure corresponding to the preview stream is generally the default value set by the camera.
  • taking the exposure corresponding to the preview stream as a benchmark, the exposure is increased by a certain percentage to capture one or more frames with a larger exposure (which may be called long exposure frames), and decreased by a certain percentage to capture one or more frames with a smaller exposure (which may be called short exposure frames).
  • for example, if the exposure corresponding to the preview stream of the first camera is M, a long-exposure frame with an exposure greater than M and a short-exposure frame with an exposure less than M can be captured; the long-exposure frame and the short-exposure frame are then two acquired first images with different exposures.
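The bracketing step described above can be sketched as follows. This is a minimal illustration only, not the patent's implementation; the function name `bracket_exposures` and the ±50% ratio are assumptions, since the text only says the exposure is raised or lowered "by a certain percentage":

```python
def bracket_exposures(baseline, ratio=0.5):
    """Derive a long and a short exposure around the preview-stream
    baseline M by raising/lowering it by a fixed percentage.
    The 50% ratio is illustrative only."""
    long_exposure = baseline * (1.0 + ratio)
    short_exposure = baseline * (1.0 - ratio)
    return long_exposure, short_exposure

# e.g. preview-stream exposure M = 100
long_e, short_e = bracket_exposures(100.0)
print(long_e, short_e)  # 150.0 50.0
```

The same bracketing can be applied independently to the second camera's preview stream, since the two cameras' exposures are not required to match.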
  • Acquiring at least two frames of second images with different exposures captured by the second camera may include:
  • the same method as that of the first camera may be used to acquire at least two frames of second images with different exposures captured by the second camera. It should be noted that the number of the first image and the number of the second image may be the same or different; there is no corresponding size-limited relationship between the exposure amount of each frame of the first image and the exposure amount of each frame of the second image.
  • the electronic device fuses the at least two frames of first images with different exposures into a first high dynamic range image, and fuses the at least two frames of second images with different exposures into a second high dynamic range image;
  • HDR: High Dynamic Range.
  • fusion processing refers to synthesizing a high dynamic range image based on multiple frames of low dynamic range images with different exposures, so as to obtain more image details and improve image clarity.
  • the low dynamic range image with a large exposure is mainly used to restore the image details of the dark area of the scene
  • the low dynamic range image with a small exposure is mainly used to restore the image details of the bright area of the scene.
  • in conventional image fusion scenarios, it is generally expected that the fused image has the same or similar image features as the first high dynamic range image; if the image features of the two high dynamic range images differ greatly, fusing them would disturb features such as brightness. Therefore, the process of fusing the frames of the second image can be guided according to image features such as the brightness of the first high dynamic range image, so that the obtained second high dynamic range image has image features, such as brightness, that are the same as or similar to those of the first high dynamic range image; in that case, the obtained fused image will also have image features the same as or similar to those of the first high dynamic range image.
  • fusing the at least two frames of second images with different exposures into a second high dynamic range image may include:
  • according to the image features (such as RGB values, brightness values, etc.) of each pixel in the first high dynamic range image, the weights with which the image features of each pixel in the second high dynamic range image are drawn from the respective second images are calculated, that is, the fusion weight of each second image; the basic principle is to make the second high dynamic range image obtained after fusion have the same or similar image features as the first high dynamic range image.
  • after the fusion weights are obtained, HDR fusion processing can be performed on the second images to obtain the corresponding second high dynamic range image.
  • suppose the second images are I1 and I2; the image feature of a pixel Q in the second high dynamic range image to be generated comes from I1 with weight X and from I2 with weight Y, and the image feature of the pixel corresponding to Q is A in I1 and B in I2. Then, when I1 and I2 are fused into the second high dynamic range image, the image feature of Q in the second high dynamic range image is A*X+B*Y; by analogy, the image feature of each pixel in the second high dynamic range image can be calculated.
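The per-pixel weighted fusion just described can be sketched as follows; this is an illustrative sketch, and the function name `fuse_by_weights` is an assumption, not a name from the patent:

```python
import numpy as np

def fuse_by_weights(images, weights):
    """Per-pixel weighted sum: the feature of each output pixel is
    sum_i weights[i] * images[i], as in A*X + B*Y for two frames."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    weights = [np.asarray(w, dtype=np.float64) for w in weights]
    out = np.zeros_like(images[0])
    for im, w in zip(images, weights):
        out += w * im
    return out

# Two 1x2 "images" I1, I2 with per-pixel weights X, Y (X + Y = 1 per pixel)
I1 = [[120.0, 80.0]]
I2 = [[10.0, 40.0]]
X = [[0.75, 0.5]]
Y = [[0.25, 0.5]]
print(fuse_by_weights([I1, I2], [X, Y]))  # [[92.5 60. ]]
```

The weight arrays vary per pixel, which is what allows each pixel of the second high dynamic range image to draw more from the long or short exposure frame as needed.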
  • in a possible implementation, calculating the weights with which the image features of each pixel in the second high dynamic range image come from the respective second images may include: first calculating the fusion weights of the first images according to the high dynamic range fusion algorithm, that is, the weights with which the image features of each pixel in the first high dynamic range image come from the respective first images.
  • suppose the first images are a total of 3 frames of low dynamic range images, namely a long exposure frame, a medium exposure frame, and a short exposure frame, where the exposure of the long exposure frame > the exposure of the medium exposure frame > the exposure of the short exposure frame; the image feature of a certain pixel Q is A in the long exposure frame, B in the medium exposure frame, and C in the short exposure frame. The weight X from the long exposure frame, the weight Y from the medium exposure frame, and the weight Z from the short exposure frame, with which the image feature of pixel Q in the first high dynamic range image is composed, can then be obtained from the weighting formulas of the high dynamic range fusion algorithm.
  • for every other pixel, the weights with which its image features come from the respective first images can be calculated in the same manner as for pixel Q above. Then, according to the weights with which the image features of each pixel in the first high dynamic range image come from the respective first images, the weights with which the image features of each pixel in the second high dynamic range image come from the respective second images can be obtained as follows: for any target pixel in the second high dynamic range image, the weights with which the image features of the corresponding pixel in the first high dynamic range image come from the respective first images are determined as the weights with which the image features of the target pixel come from the respective second images.
  • for example, suppose the first images are a first long exposure frame, a first medium exposure frame, and a first short exposure frame, and the second images are a second long exposure frame, a second medium exposure frame, and a second short exposure frame. Suppose the image feature of a target pixel Q in the first high dynamic range image is P, and the image feature of Q is A in the first long exposure frame, B in the first medium exposure frame, and C in the first short exposure frame, so that P = A*X+B*Y+C*Z, where X, Y, and Z are the fusion weights of the three first images. If the image feature of Q is D in the second long exposure frame, E in the second medium exposure frame, and F in the second short exposure frame, then by reusing the weights, the image feature S of Q in the second high dynamic range image is S = D*X+E*Y+F*Z.
  • the image feature may be a feature of any channel in the RGB domain, or a feature of the Y channel (brightness) in the YUV domain.
  • a second high dynamic range image having image characteristics similar to that of the first high dynamic range image can be obtained by using weight reuse.
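Weight reuse for one pixel can be sketched as follows; the function name `reuse_weights` and the numeric values are illustrative assumptions, not values from the patent:

```python
def reuse_weights(first_path_weights, second_frame_features):
    """Weight reuse: apply the fusion weights (X, Y, Z) computed for the
    first images directly to the second images' features (D, E, F):
    S = D*X + E*Y + F*Z."""
    return sum(w * f for w, f in zip(first_path_weights, second_frame_features))

# Hypothetical per-pixel weights taken from the first (wide) path
X, Y, Z = 0.5, 0.25, 0.25
# Features of pixel Q in the second long/medium/short exposure frames
D, E, F = 200.0, 100.0, 20.0
print(reuse_weights((X, Y, Z), (D, E, F)))  # 130.0
```

No equation solving is needed here, which is why weight reuse is the cheaper option when the two cameras' images have similar feature distributions.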
  • in another possible implementation, calculating the weights with which the image features of each pixel in the second high dynamic range image come from the respective second images may include: for any target pixel in the second high dynamic range image, calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in the respective second images, the weights with which the image feature of the target pixel comes from the respective second images, where the image feature of the target pixel is set equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • when the difference in image features between the first images and the second images is relatively large, weight reuse is no longer suitable for determining the fusion weights of the second images if the image features of the second high dynamic range image are required to be the same as or similar to those of the first high dynamic range image.
  • image feature multiplexing can be adopted, that is, the fusion weight of each second image can be calculated by using the same or similar image feature of each corresponding pixel in the two high dynamic range images as a known condition.
  • suppose the second images include a first exposure image and a second exposure image, where the exposure of the first exposure image is greater than the exposure of the second exposure image; the weights with which the image feature of the target pixel comes from the first exposure image and the second exposure image can then be calculated from the following formulas: A*X + B*Y = P and X + Y = 1, where X denotes the weight with which the image feature of the target pixel comes from the first exposure image, Y denotes the weight with which it comes from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image (or another value close to that image feature). A, B, and P are all known values, so the weights X and Y can be calculated.
  • for example, if the image feature of the corresponding pixel of the target pixel is 92 in the first high dynamic range image, 120 in the first exposure image, and 10 in the second exposure image, then the equations 120*X + 10*Y = 92 and X + Y = 1 can be obtained and solved for X and Y.
  • every pixel can be processed in the same way to obtain its corresponding fusion weights, and each pixel is then fused according to those weights to obtain a second high dynamic range image having the same image features as the first high dynamic range image.
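The two-frame case reduces to solving a pair of linear equations per pixel. A minimal sketch, using the example numbers above (the function name `solve_two_weights` is an assumption):

```python
def solve_two_weights(A, B, P):
    """Solve the system A*X + B*Y = P, X + Y = 1 for (X, Y).
    Substituting Y = 1 - X gives X = (P - B) / (A - B)."""
    if A == B:
        raise ValueError("A == B: the weights are underdetermined")
    X = (P - B) / (A - B)
    return X, 1.0 - X

# The worked example: P = 92, A = 120, B = 10
X, Y = solve_two_weights(120.0, 10.0, 92.0)
print(round(X, 4), round(Y, 4))  # 0.7455 0.2545
```

Note the degenerate case A == B, where the target feature constrains only the sum of the weights; a real implementation would need a fallback there (e.g. equal weights).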
  • suppose instead that the second images include a first exposure image, a second exposure image, and a third exposure image, where the exposure of the first exposure image is greater than the exposure of the second exposure image, and the exposure of the second exposure image is greater than the exposure of the third exposure image; the weights with which the image feature of the target pixel comes from the first, second, and third exposure images can then be calculated from the following formulas: A*X + B*Y + C*Z = P and X + Y + Z = 1, where:
  • X, Y, and Z denote the weights with which the image feature of the target pixel comes from the first, second, and third exposure images, respectively; A, B, and C denote the image features of the corresponding pixel of the target pixel in the first, second, and third exposure images, respectively; and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • A, B, C, and P are all known values, but since there are three unknown quantities X, Y, and Z, an additional restrictive condition must be introduced to calculate the values of X, Y, and Z.
  • the additional condition may be determined from the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image comes from the respective first images: any one of the weights with which the image feature of the target pixel comes from the first exposure image, the second exposure image, and the third exposure image is taken as a set value.
  • for example, suppose the first images include a fourth exposure image, a fifth exposure image, and a sixth exposure image with successively decreasing exposures, and the image feature of the corresponding pixel of the target pixel is 100 in the fourth exposure image (fusion weight 50%), 50 in the fifth exposure image (fusion weight 20%), and 10 in the sixth exposure image (fusion weight 30%), so that P = 100*0.5 + 50*0.2 + 10*0.3 = 63. Suppose also that the image feature of the corresponding pixel of the target pixel is 120 in the first exposure image, 60 in the second exposure image, and 5 in the third exposure image; then according to the above formulas, 120*X + 60*Y + 5*Z = 63 and X + Y + Z = 1.
  • any one of X, Y, and Z can then be given a set value. Since the fusion weight corresponding to the target pixel in the fourth exposure image is the largest (50%), the weight X corresponding to the first exposure image can be set to 50%, and the values of the other two weights Y and Z can then be calculated.
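Pinning one weight turns the three-unknown case back into a solvable two-equation system. A sketch under that assumption (the function name `solve_three_weights` is hypothetical, and the worked example uses P = 63 as derived above):

```python
def solve_three_weights(feats, P, fixed_index, fixed_value):
    """Solve A*X + B*Y + C*Z = P with X + Y + Z = 1, where one weight
    (chosen by fixed_index: 0 for X, 1 for Y, 2 for Z) is pinned to a
    set value taken from the first images' fusion weights."""
    free = [i for i in range(3) if i != fixed_index]
    a, b = feats[free[0]], feats[free[1]]
    rhs = P - feats[fixed_index] * fixed_value   # remaining weighted sum
    rem = 1.0 - fixed_value                      # remaining weight mass
    if a == b:
        raise ValueError("remaining system is underdetermined")
    u = (rhs - b * rem) / (a - b)                # from a*u + b*(rem - u) = rhs
    weights = [0.0, 0.0, 0.0]
    weights[fixed_index] = fixed_value
    weights[free[0]] = u
    weights[free[1]] = rem - u
    return tuple(weights)

# The worked example: A=120, B=60, C=5, P=63, with X pinned at 50%
X, Y, Z = solve_three_weights([120.0, 60.0, 5.0], 63.0, 0, 0.5)
print(round(X, 4), round(Y, 4), round(Z, 4))  # 0.5 0.0091 0.4909
```

In this example Y comes out very small, meaning almost all of the remaining contribution is drawn from the third (shortest) exposure image for this pixel.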
  • the image features targeted by this calculation method can be the features of any channel in the RGB domain, or the features of the Y channel (brightness) in the YUV domain.
  • for the UV channel (chrominance) in the YUV domain, the UV value of the corresponding pixel of the target pixel in the second exposure image can be used directly as the UV value of the target pixel in the fused second high dynamic range image.
  • the obtained second high dynamic range image can have the same or similar image characteristics as the first high dynamic range image, such as brightness, so as to meet the requirements of the specified image fusion scene.
  • the electronic device fuses the first high dynamic range image and the second high dynamic range image to obtain a fused image.
  • specifically, the second high dynamic range image is fused into the region of the first high dynamic range image that corresponds to the second high dynamic range image, so as to obtain the fused image.
  • the high dynamic range image fusion process mainly includes image registration and image feature superposition. For details, reference may be made to related content on image fusion in the prior art, which will not be repeated here.
  • the above process can restore some image details lost due to overexposure of the small field of view image by performing high dynamic range fusion processing on the small field of view image, thereby improving the clarity of the fused image.
  • FIG. 3 is a schematic diagram of shooting ranges of two cameras with different field of view angles adopted in the embodiment of the present application.
  • the camera with a larger field of view is the first camera
  • the camera with a smaller field of view is the second camera
  • the first camera and the second camera are located on the same surface of an electronic device.
  • the shooting ranges of the two cameras are shown by the two rectangular boxes in FIG. 3; obviously, the shooting range of the first camera includes the shooting range of the second camera, so the image captured by the first camera (the first image) includes the image captured by the second camera (the second image).
  • the first camera can be a camera with a larger field of view such as a wide-angle camera or an ordinary camera
  • the second camera can be a camera with a smaller field of view such as a telephoto camera
  • Fig. 4 is a schematic diagram of an image with a large field of view and a corresponding image with a small field of view provided by an embodiment of the present application.
  • the image on the left is an image with a large field of view, that is, the first image captured by the first camera mentioned above; the image on the right is an image with a small field of view, that is, the second image captured by the second camera mentioned above.
  • the image with a small field of view is usually fused into the image with a large field of view.
  • some areas of the image with a small field of view may be overexposed (such as the overexposed area marked in Figure 4), resulting in loss of image details, which will affect the clarity of the fused image.
  • the embodiment of the present application proposes an image fusion method, and a schematic diagram of its specific operation principle is shown in FIG. 5 .
  • as shown in FIG. 5, the main road (that is, the shooting channel where the main camera is located, which can usually be an ordinary camera with a large field of view) sets different exposure parameters according to the preview stream of the main camera, and a main road long exposure frame and a main road short exposure frame are obtained by shooting, where the exposure of the main road long exposure frame is greater than that of the main road short exposure frame; then, an HDR fusion operation is performed on the main road long exposure frame and the main road short exposure frame to obtain the main road high dynamic range image.
  • the auxiliary road (that is, the shooting channel where the auxiliary camera is located, which can usually be a telephoto camera with a small field of view) will set different exposure parameters according to the preview stream of the auxiliary camera, and the long exposure of the auxiliary road will be obtained by shooting Frames and short-exposure frames of the auxiliary road, wherein the exposure of the long-exposure frame of the auxiliary road is greater than the exposure of the short-exposure frame of the auxiliary road; then, an HDR fusion operation is performed on the long-exposure frame of the auxiliary road and the short-exposure frame of the auxiliary road to obtain a high dynamic range image of the auxiliary road. Finally, the high dynamic range image of the main road and the high dynamic range image of the auxiliary road are fused to obtain the fused image.
  • the HDR fusion process of the long-exposure frame of the auxiliary road and the short-exposure frame of the auxiliary road can be guided according to the image characteristics of the high dynamic range image of the main road (as indicated by the dotted line in FIG. 5 ). shown), so that the obtained auxiliary road high dynamic range image has the same or similar brightness and other image characteristics as the main road high dynamic range image. That is to say, the main road can guide the auxiliary road with HDR effects, and the specific guidance method can refer to the related content above.
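The dual-path pipeline of FIG. 5 can be sketched end to end as follows. This is a toy illustration under stated assumptions: the fusion here is a simple weighted sum, the guidance is represented by reusing the main road's weight for the auxiliary road, registration is assumed already done, and all names (`hdr_fuse`, `dual_path_fusion`, `aux_region`) are hypothetical:

```python
import numpy as np

def hdr_fuse(long_frame, short_frame, w_long):
    """Toy HDR fusion: per-pixel weighted sum of a long and a short frame."""
    return w_long * long_frame + (1.0 - w_long) * short_frame

def dual_path_fusion(main_long, main_short, aux_long, aux_short, aux_region, w_long=0.75):
    """Fuse each path into an HDR image, reuse the main road's weight for
    the auxiliary road (a stand-in for the HDR-effect guidance), then
    paste the auxiliary HDR image into its region of the main HDR image."""
    main_hdr = hdr_fuse(main_long, main_short, w_long)
    aux_hdr = hdr_fuse(aux_long, aux_short, w_long)  # guided by the same weight
    fused = main_hdr.copy()
    r0, r1, c0, c1 = aux_region  # registration assumed already done
    fused[r0:r1, c0:c1] = aux_hdr
    return fused

# 4x4 main-road frames; 2x2 auxiliary-road frames covering the centre region
main_long = np.full((4, 4), 180.0); main_short = np.full((4, 4), 60.0)
aux_long = np.full((2, 2), 200.0);  aux_short = np.full((2, 2), 40.0)
out = dual_path_fusion(main_long, main_short, aux_long, aux_short, (1, 3, 1, 3))
print(out[0, 0], out[1, 1])  # 150.0 160.0
```

Because both paths are fused with matching weights, the pasted centre region stays close in brightness to its surroundings, which is the point of the guidance step.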
  • FIG. 6 is a schematic diagram of the effect of the main road long exposure frame, the main road short exposure frame, and the main road high dynamic range image in FIG. 5.
  • the long exposure frame of the main road and the short exposure frame of the main road are images of different exposures for the same balcony scene, and the long exposure frame of the main road is an image with a larger exposure, which shows that the overall brightness of the image is brighter;
  • the short-exposure frame of the main road is an image with a small exposure, and it can be seen that the overall brightness of the image is relatively dark.
  • after HDR fusion, the main road high dynamic range image is obtained; it can be seen that its brightness is moderate, and image details lost in the main road long exposure frame and/or the main road short exposure frame are restored to a certain extent.
  • FIG. 7 is a schematic diagram of the effect of the auxiliary road long exposure frame, the auxiliary road short exposure frame, and the auxiliary road high dynamic range image in FIG. 5.
  • the auxiliary road long exposure frame and the auxiliary road short exposure frame are images of the same balcony scene (the same scene as in FIG. 6) with different exposures; the auxiliary road long exposure frame has a larger exposure, so its overall brightness is brighter, while the auxiliary road short exposure frame has a smaller exposure, so its overall brightness is relatively dark.
  • the main road long exposure frame shown in FIG. 6 contains the auxiliary road long exposure frame shown in FIG. 7, and the main road short exposure frame shown in FIG. 6 contains the auxiliary road short exposure frame shown in FIG. 7.
  • after HDR fusion, the auxiliary road high dynamic range image is obtained; it can be seen that its brightness is moderate, and image details lost in the auxiliary road long exposure frame and/or the auxiliary road short exposure frame are restored to a certain extent.
  • the obtained high dynamic range image of the auxiliary road in Figure 7 has image characteristics such as brightness that are the same or similar to the high dynamic range image of the main road in Figure 6 .
  • FIG. 8 is a schematic diagram of the effect of fusing the main road high dynamic range image in FIG. 6 and the auxiliary road high dynamic range image in FIG. 7. Since the main road high dynamic range image and the auxiliary road high dynamic range image have the same or similar image features, the influence of image fusion on image features such as brightness and color can be reduced, so that the obtained fused image has the same or similar image features as the main road image.
  • the right side of Fig. 8 is a schematic diagram of the fused image, in which the area within the dotted frame is the target area of image fusion, and it can be seen that the target area has obtained a certain degree of sharpness improvement effect.
  • FIG. 9 shows a structural block diagram of an image fusion device provided in the embodiment of the present application.
  • the device includes:
  • An image acquisition module 901 configured to acquire at least two frames of first images with different exposures captured by the first camera, and acquire at least two frames of second images with different exposures captured by the second camera, wherein the first The angle of view of the camera is larger than the angle of view of the second camera, and the first image includes the second image;
  • a high dynamic range processing module 902 configured to fuse the at least two frames of first images with different exposures into a first high dynamic range image, and fuse the at least two frames of second images with different exposures into a second high dynamic range image dynamic range images;
  • the image fusion module 903 is configured to fuse the first high dynamic range image and the second high dynamic range image to obtain a fused image.
  • the high dynamic range processing module may include:
  • a fusion weight calculation unit configured to calculate, according to the image features of each pixel in the first high dynamic range image, that the image features of each pixel in the second high dynamic range image come from each of the first high dynamic range images. The weight of the second image;
  • a high dynamic range fusion unit, configured to fuse the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image features of each pixel in the second high dynamic range image come from the respective second images.
  • the fusion weight calculation unit may include:
  • the first fusion weight calculation subunit is configured to calculate, based on the image feature of each pixel in the first high dynamic range image and the image high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weight with which the image feature of each pixel in the first high dynamic range image is derived from each of the first images;
  • the second fusion weight calculation subunit is configured to calculate, from the weights with which the image feature of each pixel in the first high dynamic range image is derived from each of the first images, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images.
  • the second fusion weight calculation subunit may be specifically configured to: for any target pixel in the second high dynamic range image, take the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is derived from each of the first images as the weights with which the image feature of the target pixel is derived from each of the second images.
  • the fusion weight calculation unit may include:
  • the third fusion weight calculation subunit is configured to: for any target pixel in the second high dynamic range image, calculate, from the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is derived from each of the second images, wherein the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image and a second exposure image, and the exposure of the first exposure image is greater than the exposure of the second exposure image;
  • the third fusion weight calculation subunit may be specifically configured to: calculate the weights with which the image feature of the target pixel is derived from the first exposure image and the second exposure image according to the following formulas:
  • A*X+B*Y=P
  • X+Y=1
  • where X denotes the weight with which the image feature of the target pixel is derived from the first exposure image, Y denotes the weight with which it is derived from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the at least two frames of second images with different exposures include a first exposure image, a second exposure image and a third exposure image, the exposure of the first exposure image is greater than the exposure of the second exposure image, and the exposure of the second exposure image is greater than the exposure of the third exposure image;
  • the third fusion weight calculation subunit may specifically include:
  • a weight setting subunit configured to set, based on the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is derived from each of the first images, any one of the weights with which the image feature of the target pixel is derived from the first exposure image, the second exposure image and the third exposure image to a set value;
  • the formula calculation subunit is configured to calculate the weights with which the image feature of the target pixel is derived from the first exposure image, the second exposure image and the third exposure image according to the following formulas:
  • A*X+B*Y+C*Z=P
  • X+Y+Z=1
  • where X denotes the weight with which the image feature of the target pixel is derived from the first exposure image, Y the weight with which it is derived from the second exposure image, and Z the weight with which it is derived from the third exposure image; A, B and C denote the image features of the corresponding pixels of the target pixel in the first, second and third exposure images respectively, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  • the image acquisition module may include:
  • the first image capturing unit is configured to capture, with the first camera, one or more frames of images whose exposure is greater than a first reference exposure, the first reference exposure being the exposure corresponding to the preview stream of the first camera;
  • the second image capturing unit is configured to capture, with the first camera, one or more frames of images whose exposure is less than the first reference exposure;
  • a first image determining unit configured to determine the one or more frames of images whose exposure is greater than the first reference exposure and the one or more frames of images whose exposure is less than the first reference exposure as the at least two frames of first images with different exposures;
  • the third image capturing unit is configured to capture, with the second camera, one or more frames of images whose exposure is greater than a second reference exposure, the second reference exposure being the exposure corresponding to the preview stream of the second camera;
  • a fourth image capturing unit configured to capture, with the second camera, one or more frames of images whose exposure is less than the second reference exposure;
  • the second image determining unit is configured to determine the one or more frames of images whose exposure is greater than the second reference exposure and the one or more frames of images whose exposure is less than the second reference exposure as the at least two frames of second images with different exposures.
  • the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, various image fusion methods as proposed in the present application are implemented.
  • the embodiment of the present application also provides a computer program product, which, when the computer program product runs on the electronic device, causes the electronic device to execute each image fusion method proposed in the present application.
  • Fig. 10 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 of this embodiment includes: at least one processor 1000 (only one is shown in FIG. 10), a memory 1001, and a computer program stored in the memory 1001 and executable on the at least one processor 1000.
  • the electronic device may include, but not limited to, a processor 1000 and a memory 1001 .
  • FIG. 10 is only an example of the electronic device 100 and does not constitute a limitation on the electronic device 100; it may include more or fewer components than shown in the figure, combine certain components, or have different components, and may, for example, also include input and output devices, network access devices, and so on.
  • the processor 1000 may be a central processing unit (Central Processing Unit, CPU); the processor 1000 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 1001 may be an internal storage unit of the electronic device 100 in some embodiments, such as a hard disk or a memory of the electronic device 100.
  • the memory 1001 may also be an external storage device of the electronic device 100 in other embodiments, such as a plug-in hard disk equipped on the electronic device 100, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 1001 may also include both an internal storage unit of the electronic device 100 and an external storage device.
  • the memory 1001 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer programs.
  • the memory 1001 can also be used to temporarily store data that has been output or will be output.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present application may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the foregoing method embodiments may be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may at least include: any entity or device capable of carrying computer program codes to electronic equipment, recording media, computer memory, read-only memory (ROM, Read-Only Memory), random-access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media.
  • examples of such media include a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • in some jurisdictions, under legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.

Abstract

The present application is applicable to the technical field of image processing, and provides an image fusion method, an electronic device, a computer-readable storage medium, and a computer program product. The method comprises: acquiring at least two frames of first images with different exposures captured by a first camera, and acquiring at least two frames of second images with different exposures captured by a second camera, wherein the field of view of the first camera is larger than the field of view of the second camera, and the first images contain the second images; fusing the at least two frames of first images with different exposures into a first high dynamic range image, and fusing the at least two frames of second images with different exposures into a second high dynamic range image; and fusing the first high dynamic range image and the second high dynamic range image to obtain a fused image. This processing can restore some of the image details lost to overexposure in the small field-of-view image, and thus improves the sharpness of the fused image.

Description

Image fusion method, electronic device, storage medium and computer program product
This application claims priority to the Chinese patent application No. 202110707247.7, entitled "Image fusion method, electronic device, storage medium and computer program product" and filed with the State Intellectual Property Office on June 23, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the technical field of image processing, and in particular relates to an image fusion method, an electronic device, a computer-readable storage medium and a computer program product.
Background Art
Existing electronic devices are usually equipped with multiple cameras, and fusing the images captured by multiple cameras can effectively improve the quality of captured images. However, the inventors found that when a small field-of-view image captured by a telephoto camera is fused into a large field-of-view image captured by an ordinary camera, some regions of the small field-of-view image may be overexposed due to backlighting or other causes and lose image detail, which degrades the sharpness of the fused image.
Summary of the Invention
In view of this, the embodiments of the present application provide an image fusion method, an electronic device, a computer-readable storage medium and a computer program product, which can improve the sharpness of a fused image.
In a first aspect, an embodiment of the present application provides an image fusion method, including:
acquiring at least two frames of first images with different exposures captured by a first camera, and acquiring at least two frames of second images with different exposures captured by a second camera, wherein the field of view of the first camera is larger than the field of view of the second camera, and the first images contain the second images;
fusing the at least two frames of first images with different exposures into a first high dynamic range image, and fusing the at least two frames of second images with different exposures into a second high dynamic range image;
fusing the first high dynamic range image and the second high dynamic range image to obtain a fused image.
在本申请实施例中,首先,获取至少两帧不同曝光量的大视场角图像以及至少两帧不同曝光量的小视场角图像;然后,分别对大视场角图像以及小视场角图像进行高动态范围融合处理,获得大视场角图像对应的第一高动态范围图像以及小视场角图像对应的第二高动态范围图像;最后,将第一高动态范围图像和第二高动态范围图像融合,获得融合后的图像。上述过程通过对小视场角图像进行高动态范围融合处理,能够恢复小视场角图像出现过度曝光而损失的部分图像细节,从而提升融合后图像的清晰度。In the embodiment of the present application, at first, at least two frames of images with a large field of view with different exposures and at least two frames of images with a small field of view with different exposures are obtained; High dynamic range fusion processing, obtaining the first high dynamic range image corresponding to the large field of view image and the second high dynamic range image corresponding to the small field of view image; finally, combining the first high dynamic range image and the second high dynamic range image Fusion to obtain the fused image. The above process can restore some image details lost due to overexposure of the small field of view image by performing high dynamic range fusion processing on the small field of view image, thereby improving the clarity of the fused image.
In an embodiment of the present application, fusing the at least two frames of second images with different exposures into the second high dynamic range image may include:
calculating, based on the image feature of each pixel in the first high dynamic range image, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images;
fusing the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images.
Using the image features of the first high dynamic range image to guide the HDR fusion of the second images makes the resulting second high dynamic range image match the first high dynamic range image in brightness and other image features over the overlapping region, so the image obtained by fusing the two high dynamic range images also has the same or similar image features as the first high dynamic range image. Concretely, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each second image is computed from the image feature of the corresponding pixel in the first high dynamic range image, and the second images are then fused according to these weights.
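As a concrete illustration of this guidance, the following sketch chooses per-pixel weights so that the fused second HDR value reproduces the corresponding first HDR value. Assumptions (not stated in the source): the overlap region is already registered, the image feature is a single luminance value per pixel, there are exactly two second images, and all names are hypothetical.

```python
def guided_second_hdr(first_hdr_crop, second_bright, second_dark):
    """Per pixel, pick weights X and 1-X for the two second images so
    that the fused value equals the first HDR value P, i.e. solve
    A*X + B*(1-X) = P, then clamp X into [0, 1] to keep it physical."""
    fused = []
    for row_p, row_a, row_b in zip(first_hdr_crop, second_bright, second_dark):
        out_row = []
        for p, a, b in zip(row_p, row_a, row_b):
            # Degenerate case: the two exposures agree, any split works.
            x = 0.5 if a == b else (p - b) / (a - b)
            x = min(1.0, max(0.0, x))
            out_row.append(a * x + b * (1.0 - x))
        fused.append(out_row)
    return fused

bright = [[200.0, 180.0]]            # long-exposure second image
dark = [[80.0, 60.0]]                # short-exposure second image
first_crop = [[120.0, 150.0]]        # first HDR values over the overlap
second_hdr = guided_second_hdr(first_crop, bright, dark)
```

When the guiding value P lies between the two exposures' values, the fused second HDR pixel reproduces P exactly, which is precisely the "same or similar image features over the overlapping region" property described above.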
In an embodiment of the present application, calculating, based on the image feature of each pixel in the first high dynamic range image, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images may include:
calculating, based on the image feature of each pixel in the first high dynamic range image and the image high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weight with which the image feature of each pixel in the first high dynamic range image is derived from each of the first images;
calculating, from the weights with which the image feature of each pixel in the first high dynamic range image is derived from each of the first images, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images.
From the image feature of each pixel in the first high dynamic range image and the image high dynamic range fusion algorithm corresponding to that image, the fusion weight of each first image can be computed, and from these weights the weight with which the image feature of each pixel in the second high dynamic range image is derived from each second image can then be computed. The weights may be computed by weight reuse or by image feature multiplexing.
Further, calculating, from the weights with which the image feature of each pixel in the first high dynamic range image is derived from each of the first images, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images may include:
for any target pixel in the second high dynamic range image, taking the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is derived from each of the first images as the weights with which the image feature of the target pixel is derived from each of the second images.
This is a weight reuse implementation: the fusion weights of the first high dynamic range image are reused in the fusion of the second high dynamic range image. When the image features of the first images and the second images are close, weight reuse yields a second high dynamic range image whose image features are close to those of the first high dynamic range image.
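A minimal sketch of weight reuse follows. It assumes (the source does not specify the registration model) that the second camera's field of view is a registered crop of the first image at the same scale, located at offset (top, left) with size h2 x w2; the function name and toy values are hypothetical.

```python
def reuse_weights(first_weight_maps, top, left, h2, w2):
    """Weight reuse: each pixel of the second (narrow-FOV) image copies
    the fusion weights of its corresponding pixel in the first
    (wide-FOV) image, one weight map per exposure frame."""
    return [[row[left:left + w2] for row in wmap[top:top + h2]]
            for wmap in first_weight_maps]

# Two 4x4 weight maps for the first camera's exposures (toy values).
w_long = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
w_short = [[1.0 - v for v in row] for row in w_long]
second_weights = reuse_weights([w_long, w_short], 1, 1, 2, 2)
```

With real data the crop offset would come from calibration or image registration between the two cameras; here it is simply given.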
In an embodiment of the present application, calculating, based on the image feature of each pixel in the first high dynamic range image, the weight with which the image feature of each pixel in the second high dynamic range image is derived from each of the second images may include:
for any target pixel in the second high dynamic range image, calculating, from the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is derived from each of the second images, wherein the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
This is an image feature multiplexing implementation: taking as a known condition that each pair of corresponding pixels in the two high dynamic range images has the same image feature, the fusion weight of each second image is computed. Even when the image features of the first images and the second images differ considerably, image feature multiplexing yields a second high dynamic range image whose image features equal those of the first high dynamic range image.
Further, the at least two frames of second images with different exposures include a first exposure image and a second exposure image, the exposure of the first exposure image being greater than the exposure of the second exposure image; calculating, from the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is derived from each of the second images may include:
calculating the weights with which the image feature of the target pixel is derived from the first exposure image and the second exposure image according to the following formulas:
A*X+B*Y=P
X+Y=1
where X denotes the weight with which the image feature of the target pixel is derived from the first exposure image, Y denotes the weight with which it is derived from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
In the above formulas, A, B and P are all known values, so the weights X and Y can be solved for.
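Solving this two-equation system for one pixel can be sketched as follows; the function name and the numeric values are illustrative only, and the degenerate A = B case (where the system does not pin down X) is handled with an arbitrary even split.

```python
def solve_two_exposure_weights(a, b, p):
    """Solve A*X + B*Y = P together with X + Y = 1.

    a, b: image feature of the target pixel's corresponding pixel in
          the first / second exposure image
    p:    image feature of the corresponding pixel in the first high
          dynamic range image
    """
    if a == b:
        # Both exposures agree; any split reproduces A, use an even one.
        return 0.5, 0.5
    x = (p - b) / (a - b)   # from A*X + B*(1 - X) = P
    return x, 1.0 - x

x, y = solve_two_exposure_weights(200.0, 80.0, 120.0)
```

Note that if P lies outside the interval [B, A], the solved weights fall outside [0, 1]; a practical implementation would presumably clamp or otherwise constrain them.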
Further, the at least two frames of second images with different exposures include a first exposure image, a second exposure image and a third exposure image, the exposure of the first exposure image being greater than that of the second exposure image and the exposure of the second exposure image being greater than that of the third exposure image; calculating, from the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is derived from each of the second images may include:
setting, based on the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is derived from each of the first images, any one of the weights with which the image feature of the target pixel is derived from the first exposure image, the second exposure image and the third exposure image to a set value;
calculating the weights with which the image feature of the target pixel is derived from the first exposure image, the second exposure image and the third exposure image according to the following formulas:
A*X+B*Y+C*Z=P
X+Y+Z=1
where X denotes the weight with which the image feature of the target pixel is derived from the first exposure image, Y the weight with which it is derived from the second exposure image, and Z the weight with which it is derived from the third exposure image; A, B and C denote the image features of the corresponding pixels of the target pixel in the first, second and third exposure images respectively, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
In the above formulas, A, B, C and P are all known values, but since there are three unknowns X, Y and Z, an additional constraint must be introduced before X, Y and Z can be solved for. Specifically, based on the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is derived from each of the first images, any one of the weights with which the image feature of the target pixel is derived from the first exposure image, the second exposure image and the third exposure image may be set to a set value.
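With one weight pinned to the set value (here Z, chosen arbitrarily for illustration), the remaining two unknowns reduce to the two-exposure case. A sketch with hypothetical names and toy values:

```python
def solve_three_exposure_weights(a, b, c, p, z_set):
    """Solve A*X + B*Y + C*Z = P with X + Y + Z = 1, given that the
    weight-setting step has pinned Z to the set value z_set."""
    p_rem = p - c * z_set   # feature budget left for the first two images
    s_rem = 1.0 - z_set     # weight budget left, so X + Y = s_rem
    if a == b:
        # Degenerate case: split the remaining weight evenly.
        return s_rem / 2.0, s_rem / 2.0, z_set
    # A*X + B*(s_rem - X) = p_rem  =>  X = (p_rem - B*s_rem) / (A - B)
    x = (p_rem - b * s_rem) / (a - b)
    return x, s_rem - x, z_set

x, y, z = solve_three_exposure_weights(220.0, 120.0, 40.0, 130.0, 0.25)
```

The same reduction works if X or Y is the pinned weight instead; only the bookkeeping of which two unknowns remain changes.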
In an embodiment of the present application, acquiring the at least two frames of first images with different exposures captured by the first camera may include:
capturing, with the first camera, one or more frames of images whose exposure is greater than a first reference exposure, the first reference exposure being the exposure corresponding to the preview stream of the first camera;
capturing, with the first camera, one or more frames of images whose exposure is less than the first reference exposure;
determining the one or more frames of images whose exposure is greater than the first reference exposure and the one or more frames of images whose exposure is less than the first reference exposure as the at least two frames of first images with different exposures.
Acquiring the at least two frames of second images with different exposures captured by the second camera may include:
capturing, with the second camera, one or more frames of images whose exposure is greater than a second reference exposure, the second reference exposure being the exposure corresponding to the preview stream of the second camera;
capturing, with the second camera, one or more frames of images whose exposure is less than the second reference exposure;
determining the one or more frames of images whose exposure is greater than the second reference exposure and the one or more frames of images whose exposure is less than the second reference exposure as the at least two frames of second images with different exposures.
When a camera captures images, a corresponding preview stream is obtained, i.e. the data stream corresponding to the photo preview interface shown on the electronic device's display after the user opens the camera; the exposure corresponding to the preview stream is generally a default value set by the camera. Taking the preview-stream exposure as a baseline, the exposure can be raised by a certain proportion to capture one or more frames with larger exposure, and lowered by a certain proportion to capture one or more frames with smaller exposure.
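This bracketing around the preview-stream baseline can be sketched as below. The source only says the exposure is raised or lowered "by a certain proportion"; the power-of-two EV steps used here are an illustrative convention from photography, not something the application specifies, and all names and numbers are hypothetical.

```python
def exposure_bracket(base_exposure, ev_offsets):
    """Exposures bracketed around the preview-stream baseline.

    base_exposure: exposure corresponding to the camera's preview stream
    ev_offsets:    EV offsets relative to the baseline; each +1 EV
                   doubles the exposure, each -1 EV halves it
    """
    return [base_exposure * (2.0 ** ev) for ev in ev_offsets]

# One frame above and one frame below each camera's reference exposure.
first_bracket = exposure_bracket(10.0, (+1, -1))    # first camera
second_bracket = exposure_bracket(8.0, (+1, -1))    # second camera
```

More frames per camera simply means passing more offsets, e.g. (+2, +1, -1, -2).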
According to a second aspect, an embodiment of the present application provides an image fusion apparatus, including:
an image acquisition module, configured to acquire at least two frames of first images with different exposures captured by a first camera, and acquire at least two frames of second images with different exposures captured by a second camera, where the field of view of the first camera is larger than the field of view of the second camera, and the first image contains the second image;
a high dynamic range processing module, configured to fuse the at least two frames of first images with different exposures into a first high dynamic range image, and fuse the at least two frames of second images with different exposures into a second high dynamic range image; and
an image fusion module, configured to fuse the first high dynamic range image and the second high dynamic range image to obtain a fused image.
In an embodiment of the present application, the high dynamic range processing module may include:
a fusion weight calculation unit, configured to calculate, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images; and
a high dynamic range fusion unit, configured to fuse the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images.
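The role of the high dynamic range fusion unit, a per-pixel weighted sum of the differently exposed frames, can be sketched as below. This is an illustrative sketch using plain nested lists as grayscale images, with the pixel value standing in for the "image feature"; a real implementation would operate on full image buffers.

```python
def fuse_with_weights(images, weights):
    """Per-pixel weighted fusion: fused[r][c] = sum_k weights[k][r][c] * images[k][r][c].

    images: list of equally sized 2-D lists (one per exposure frame)
    weights: same shape as images; per-frame, per-pixel weights summing to 1
    """
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(w[r][c] * img[r][c] for img, w in zip(images, weights))
             for c in range(cols)]
            for r in range(rows)]
```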
In an embodiment of the present application, the fusion weight calculation unit may include:
a first fusion weight calculation subunit, configured to calculate, according to the image feature of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weights with which the image feature of each pixel in the first high dynamic range image comes from each of the first images; and
a second fusion weight calculation subunit, configured to calculate, according to the weights with which the image feature of each pixel in the first high dynamic range image comes from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images.
Further, the second fusion weight calculation subunit may be specifically configured to: for any target pixel in the second high dynamic range image, determine the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image comes from each of the first images as the weights with which the image feature of the target pixel comes from each of the second images.
In an embodiment of the present application, the fusion weight calculation unit may include:
a third fusion weight calculation subunit, configured to: for any target pixel in the second high dynamic range image, calculate, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel comes from each of the second images, where the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
Further, the at least two frames of second images with different exposures include a first exposure image and a second exposure image, where the exposure of the first exposure image is greater than the exposure of the second exposure image; the third fusion weight calculation subunit may be specifically configured to calculate, according to the following formulas, the weights with which the image feature of the target pixel comes from the first exposure image and the second exposure image respectively:
A*X+B*Y=P
X+Y=1
where X denotes the weight with which the image feature of the target pixel comes from the first exposure image, Y denotes the weight with which the image feature of the target pixel comes from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
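For two exposures, the system above has a closed-form solution: X = (P − B)/(A − B) and Y = 1 − X. A minimal sketch follows; the handling of the degenerate case where A equals B is an assumption, not something the patent specifies.

```python
def two_frame_weights(A, B, P):
    """Solve A*X + B*Y = P together with X + Y = 1.

    A, B: image features of the target pixel's corresponding pixels in the
    first (larger-exposure) and second (smaller-exposure) exposure images;
    P: image feature of the corresponding pixel in the first HDR image.
    """
    if A == B:
        # Degenerate case: both frames agree, so any split reproduces P;
        # an even split is an arbitrary (assumed) choice.
        return 0.5, 0.5
    X = (P - B) / (A - B)
    return X, 1.0 - X
```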
Further, the at least two frames of second images with different exposures include a first exposure image, a second exposure image, and a third exposure image, where the exposure of the first exposure image is greater than the exposure of the second exposure image, and the exposure of the second exposure image is greater than the exposure of the third exposure image; the third fusion weight calculation subunit may specifically include:
a weight setting subunit, configured to set, according to the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image comes from each of the first images, any one of the weights with which the image feature of the target pixel comes from the first exposure image, the second exposure image, and the third exposure image to a set value; and
a formula calculation subunit, configured to calculate, according to the following formulas, the weights with which the image feature of the target pixel comes from the first exposure image, the second exposure image, and the third exposure image respectively:
A*X+B*Y+C*Z=P
X+Y+Z=1
where X denotes the weight with which the image feature of the target pixel comes from the first exposure image, Y denotes the weight with which the image feature of the target pixel comes from the second exposure image, Z denotes the weight with which the image feature of the target pixel comes from the third exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, C denotes the image feature of the corresponding pixel of the target pixel in the third exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
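With three exposures, the two equations leave one degree of freedom, which is why one weight is first fixed to a set value. Fixing Z (a choice made here for illustration; the patent allows fixing any one of the three weights) reduces the system to the two-frame case:

```python
def three_frame_weights(A, B, C, P, Z=0.0):
    """Solve A*X + B*Y + C*Z = P with X + Y + Z = 1, given Z fixed.

    Substituting Y = 1 - Z - X into the first equation gives
    X = (P - C*Z - B*(1 - Z)) / (A - B).
    """
    if A == B:
        raise ValueError("A == B leaves X and Y underdetermined")
    X = (P - C * Z - B * (1.0 - Z)) / (A - B)
    Y = 1.0 - Z - X
    return X, Y, Z
```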
In an embodiment of the present application, the image acquisition module may include:
a first image capturing unit, configured to capture, by the first camera, one or more images with an exposure greater than a first reference exposure, where the first reference exposure is the exposure corresponding to the preview stream of the first camera;
a second image capturing unit, configured to capture, by the first camera, one or more images with an exposure less than the first reference exposure;
a first image determining unit, configured to determine the one or more images with an exposure greater than the first reference exposure and the one or more images with an exposure less than the first reference exposure as the at least two frames of first images with different exposures;
a third image capturing unit, configured to capture, by the second camera, one or more images with an exposure greater than a second reference exposure, where the second reference exposure is the exposure corresponding to the preview stream of the second camera;
a fourth image capturing unit, configured to capture, by the second camera, one or more images with an exposure less than the second reference exposure; and
a second image determining unit, configured to determine the one or more images with an exposure greater than the second reference exposure and the one or more images with an exposure less than the second reference exposure as the at least two frames of second images with different exposures.
According to a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where when the processor executes the computer program, the electronic device implements the following image fusion method:
acquiring at least two frames of first images with different exposures captured by a first camera, and acquiring at least two frames of second images with different exposures captured by a second camera, where the field of view of the first camera is larger than the field of view of the second camera, and the first image contains the second image;
fusing the at least two frames of first images with different exposures into a first high dynamic range image, and fusing the at least two frames of second images with different exposures into a second high dynamic range image; and
fusing the first high dynamic range image and the second high dynamic range image to obtain a fused image.
In an embodiment of the present application, the fusing, by the electronic device, of the at least two frames of second images with different exposures into the second high dynamic range image may include:
calculating, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images; and
fusing the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images.
In an embodiment of the present application, the calculating, by the electronic device according to the image feature of each pixel in the first high dynamic range image, of the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images may include:
calculating, according to the image feature of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weights with which the image feature of each pixel in the first high dynamic range image comes from each of the first images; and
calculating, according to the weights with which the image feature of each pixel in the first high dynamic range image comes from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images.
Further, the calculating, by the electronic device according to the weights with which the image feature of each pixel in the first high dynamic range image comes from each of the first images, of the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images may include:
for any target pixel in the second high dynamic range image, determining the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image comes from each of the first images as the weights with which the image feature of the target pixel comes from each of the second images.
In an embodiment of the present application, the calculating, by the electronic device according to the image feature of each pixel in the first high dynamic range image, of the weights with which the image feature of each pixel in the second high dynamic range image comes from each of the second images may include:
for any target pixel in the second high dynamic range image, calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel comes from each of the second images, where the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
Further, the at least two frames of second images with different exposures include a first exposure image and a second exposure image, where the exposure of the first exposure image is greater than the exposure of the second exposure image;
the calculating, by the electronic device according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, of the weights with which the image feature of the target pixel comes from each of the second images may include:
calculating, according to the following formulas, the weights with which the image feature of the target pixel comes from the first exposure image and the second exposure image respectively:
A*X+B*Y=P
X+Y=1
where X denotes the weight with which the image feature of the target pixel comes from the first exposure image, Y denotes the weight with which the image feature of the target pixel comes from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
Further, the at least two frames of second images with different exposures include a first exposure image, a second exposure image, and a third exposure image, where the exposure of the first exposure image is greater than the exposure of the second exposure image, and the exposure of the second exposure image is greater than the exposure of the third exposure image;
the calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, of the weights with which the image feature of the target pixel comes from each of the second images may include:
setting, according to the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image comes from each of the first images, any one of the weights with which the image feature of the target pixel comes from the first exposure image, the second exposure image, and the third exposure image to a set value; and
calculating, according to the following formulas, the weights with which the image feature of the target pixel comes from the first exposure image, the second exposure image, and the third exposure image respectively:
A*X+B*Y+C*Z=P
X+Y+Z=1
where X denotes the weight with which the image feature of the target pixel comes from the first exposure image, Y denotes the weight with which the image feature of the target pixel comes from the second exposure image, Z denotes the weight with which the image feature of the target pixel comes from the third exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, C denotes the image feature of the corresponding pixel of the target pixel in the third exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
In an embodiment of the present application, the acquiring, by the electronic device, of the at least two frames of first images with different exposures captured by the first camera may include:
capturing, by the first camera, one or more images with an exposure greater than a first reference exposure, where the first reference exposure is the exposure corresponding to the preview stream of the first camera;
capturing, by the first camera, one or more images with an exposure less than the first reference exposure; and
determining the one or more images with an exposure greater than the first reference exposure and the one or more images with an exposure less than the first reference exposure as the at least two frames of first images with different exposures.
The acquiring, by the electronic device, of the at least two frames of second images with different exposures captured by the second camera may include:
capturing, by the second camera, one or more images with an exposure greater than a second reference exposure, where the second reference exposure is the exposure corresponding to the preview stream of the second camera;
capturing, by the second camera, one or more images with an exposure less than the second reference exposure; and
determining the one or more images with an exposure greater than the second reference exposure and the one or more images with an exposure less than the second reference exposure as the at least two frames of second images with different exposures.
According to a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where when the computer program is executed, the image fusion method according to the first aspect of the embodiments of the present application is implemented.
According to a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to execute the image fusion method according to the first aspect of the embodiments of the present application.
Description of Drawings
FIG. 1 is a hardware structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a flowchart of an image fusion method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the shooting ranges of two cameras with different fields of view used in an embodiment of the present application;
FIG. 4 is a schematic diagram of a large-field-of-view image and a corresponding small-field-of-view image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the operating principle of an image fusion method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the effects of the main-path long-exposure frame, the main-path short-exposure frame, and the main-path high dynamic range image in FIG. 5;
FIG. 7 is a schematic diagram of the effects of the auxiliary-path long-exposure frame, the auxiliary-path short-exposure frame, and the auxiliary-path high dynamic range image in FIG. 5;
FIG. 8 is a schematic diagram of the effect of fusing the main-path high dynamic range image in FIG. 6 with the auxiliary-path high dynamic range image in FIG. 7;
FIG. 9 is a structural diagram of an image fusion apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular device structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
The terms used in the following embodiments are for the purpose of describing particular embodiments only and are not intended to limit the present application. As used in the specification and the appended claims of the present application, the singular forms "a", "an", "the", "said", "the above", and "this" are intended to also cover expressions such as "one or more", unless the context clearly indicates otherwise. It should also be understood that, in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate the following cases: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. In addition, "a plurality of" mentioned in the embodiments of the present application should be interpreted as two or more.
An electronic device (for example, a mobile phone) is usually provided with multiple cameras with different fields of view, such as a normal camera, a telephoto camera, and a wide-angle camera. In certain photography scenarios, multiple cameras may be used jointly to improve photo quality; for example, a small-field-of-view image captured by the telephoto camera may be fused into a large-field-of-view image captured by the normal camera to improve the definition of the corresponding region of the large-field-of-view image. To obtain more image detail, high dynamic range (HDR) fusion processing is also performed on the large-field-of-view images to obtain a corresponding high dynamic range image, which is then fused with the small-field-of-view image. However, in backlit and similar scenes, some regions of the small-field-of-view image captured by the telephoto camera may be overexposed and lose image detail, degrading the definition of the fused image.
To address the above problem, the present application proposes an image fusion method in which high dynamic range fusion processing is also performed on the small-field-of-view images, and the result is then fused with the high dynamic range image of the large-field-of-view images to obtain the fused image. With this arrangement, part of the image detail lost to overexposure in the small-field-of-view image can be recovered, thereby improving the definition of the fused image. For specific implementations of the present application, refer to the embodiments described below.
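The overall flow just described, merging each camera's exposure bracket into an HDR image first and only then fusing the two HDR images, can be sketched as below. The function names are illustrative, not APIs defined by the patent; the merge and fuse steps are passed in as callables to keep the sketch self-contained.

```python
def fuse_pipeline(wide_frames, tele_frames, hdr_merge, region_fuse):
    """Sketch of the proposed flow.

    wide_frames / tele_frames: exposure brackets from the large- and
    small-field-of-view cameras; hdr_merge: callable merging one bracket
    into an HDR image; region_fuse: callable blending the tele HDR image
    into the wide HDR image.
    """
    wide_hdr = hdr_merge(wide_frames)   # first high dynamic range image
    tele_hdr = hdr_merge(tele_frames)   # second high dynamic range image
    return region_fuse(wide_hdr, tele_hdr)
```

With stand-in callables (e.g. `sum` for merging and addition for fusing), the sketch runs end to end, which is enough to show the ordering of the two fusion stages.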
本申请提出的图像融合方法可以应用于带有至少两个不同视场角摄像头的各类电子设备,比如手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、智能家居设备等,本申请实施例对该电子设备的具体类型不作任何限制。The image fusion method proposed in this application can be applied to various electronic devices with at least two cameras with different field of view, such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) equipment, notebook computer, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook, personal digital assistant (personal digital assistant, PDA), smart home equipment, etc. The specific type of equipment is not limited in any way.
以该电子设备为手机为例,图1示出的是本申请实施例提供的手机的部分结构的框图。参考图1,手机包括:射频(Radio Frequency,RF)电路101、存储器102、输入单元103、显示单元104、传感器105、音频电路106、无线保真(wireless fidelity,WiFi)模块107、处理器108、电源109、普通摄像头110以及长焦摄像头111等部件。本领域技术人员可以理解,图1中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。Taking the electronic device as a mobile phone as an example, FIG. 1 shows a block diagram of a partial structure of the mobile phone provided by the embodiment of the present application. With reference to Fig. 1, mobile phone comprises: radio frequency (Radio Frequency, RF) circuit 101, memory 102, input unit 103, display unit 104, sensor 105, audio circuit 106, wireless fidelity (wireless fidelity, WiFi) module 107, processor 108 , power supply 109, common camera 110 and telephoto camera 111 and other components. Those skilled in the art can understand that the structure of the mobile phone shown in FIG. 1 does not constitute a limitation to the mobile phone, and may include more or less components than shown in the figure, or combine some components, or arrange different components.
下面结合图1对手机的各个构成部件进行具体的介绍:The following is a specific introduction to each component of the mobile phone in combination with Figure 1:
RF电路101可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器108处理;另外,将设计上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路101还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯装置(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE))、电子邮件、短消息服务(Short Messaging Service,SMS)等。The RF circuit 101 can be used for sending and receiving information or receiving and sending signals during a call. In particular, after receiving the downlink information of the base station, it is processed by the processor 108; in addition, the designed uplink data is sent to the base station. Generally, an RF circuit includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 101 can also communicate with networks and other devices through wireless communication. The above wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE)), email, Short Messaging Service (SMS), etc.
The memory 102 can be used to store software programs and modules, and the processor 108 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 102. The memory 102 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created through use of the mobile phone (such as audio data or a phonebook). In addition, the memory 102 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 103 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 103 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 108, and can receive and execute commands sent by the processor 108. In addition, the touch panel 1031 may be implemented as a resistive, capacitive, infrared, surface acoustic wave, or other type of panel. Besides the touch panel 1031, the input unit 103 may also include other input devices 1032. Specifically, the other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a power key), a trackball, a mouse, a joystick, and the like.
The display unit 104 can be used to display information input by the user, information provided to the user, and the various menus of the mobile phone. The display unit 104 may include a display panel 1041, which may optionally be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1031 may cover the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, it passes the operation to the processor 108 to determine the type of the touch event, and the processor 108 then provides a corresponding visual output on the display panel 1041 according to that type. Although in FIG. 1 the touch panel 1031 and the display panel 1041 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 105, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1041 according to the ambient light level, and the proximity sensor can turn off the display panel 1041 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally along three axes) and, when at rest, the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 106, a speaker 1061, and a microphone 1062 can provide an audio interface between the user and the mobile phone. The audio circuit 106 can convert received audio data into an electrical signal and transmit it to the speaker 1061, which converts it into a sound signal for output; conversely, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 106 receives and converts into audio data. The audio data is then output to the processor 108 for processing and sent via the RF circuit 101 to, for example, another mobile phone, or output to the memory 102 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 107, the mobile phone can help the user send and receive emails, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although FIG. 1 shows the WiFi module 107, it is understood that the module is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the application.
The processor 108 is the control center of the mobile phone. It connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing the software programs and/or modules stored in the memory 102 and calling the data stored in the memory 102, thereby monitoring the phone as a whole. Optionally, the processor 108 may include one or more processing units; preferably, the processor 108 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 108.
The mobile phone also includes a power supply 109 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 108 through a power management device, so that functions such as charging, discharging, and power consumption management are handled through the power management device.
The mobile phone also includes at least two cameras with different fields of view, for example an ordinary camera 110 and a telephoto camera 111, which are arranged on the same face of the phone so that joint shooting can be performed. Although not shown, the phone may also include other types of cameras, such as an infrared camera, a hyperspectral camera, or a wide-angle camera. Optionally, the cameras may be front-facing or rear-facing on the phone, which is not limited in the embodiments of the present application.
In addition, although not shown, the mobile phone may also include a Bluetooth module and the like, which are not described here.
FIG. 2 shows a flowchart of an image fusion method provided by an embodiment of the present application, including the following steps:
201. The electronic device acquires at least two frames of first images with different exposures captured by a first camera, and acquires at least two frames of second images with different exposures captured by a second camera.
The electronic device has at least two cameras with different fields of view, namely a first camera and a second camera, where the field of view of the first camera is larger than that of the second camera. The embodiments of the present application do not limit the specific types of the first and second cameras. For example, if the first camera is an ordinary camera, the second camera may be a telephoto camera; if the first camera is a wide-angle camera, the second camera may be an ordinary camera or a telephoto camera; and so on. To enable image fusion, the first and second cameras should be arranged on the same face of the electronic device and keep the same or similar shooting angles when capturing images, so that the shooting range of the first camera covers that of the second camera; that is, the first image captured by the first camera contains the second image captured by the second camera. In this document, the first image may be called the large-field-of-view image, and the second image the small-field-of-view image.
A camera can control the exposure of a captured image by adjusting parameters such as aperture size and exposure time. In the embodiments of the present application, the parameters of the first camera are adjusted to capture at least two frames of first images with different exposures, and the parameters of the second camera are adjusted to capture at least two frames of second images with different exposures.
In one implementation of the present application, acquiring the at least two frames of first images with different exposures captured by the first camera may include:
(1) capturing, with the first camera, one or more frames whose exposure is greater than a first reference exposure, where the first reference exposure is the exposure corresponding to the preview stream of the first camera;
(2) capturing, with the first camera, one or more frames whose exposure is less than the first reference exposure; and
(3) determining the one or more frames with exposure greater than the first reference exposure and the one or more frames with exposure less than the first reference exposure as the at least two frames of first images with different exposures.
When adjusting the exposure, the exposure corresponding to the preview stream of the first camera can be taken as a reference value, i.e., the first reference exposure. When capturing images, the camera obtains a corresponding preview stream, which is the data stream corresponding to the photo preview interface of the electronic device after the user opens the camera; the exposure of the preview stream is generally a default value set by the camera. Taking the preview-stream exposure as the baseline, the exposure is increased by a certain ratio to capture one or more frames with larger exposure (which may be called long-exposure frames), and decreased by a certain ratio to capture one or more frames with smaller exposure (which may be called short-exposure frames). For example, if the exposure corresponding to the preview stream of the first camera is M, the camera parameters can be adjusted to capture one long-exposure frame with an exposure of 4M and one short-exposure frame with an exposure of M/4; the long-exposure frame and the short-exposure frame are then the two acquired frames of first images with different exposures.
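As a rough illustration of this bracketing around the preview-stream baseline (the ratio of 4 matches the 4M and M/4 example above; real implementations would derive exposure values from the camera's auto-exposure logic, and the function name here is hypothetical):

```python
def bracket_exposures(baseline, ratio=4.0, n_long=1, n_short=1):
    """Return long- and short-frame exposure values around a baseline.

    `baseline` is the exposure of the preview stream (M in the text).
    Long frames multiply the baseline by `ratio`, short frames divide
    by it. The ratio of 4 is the illustrative value from the example,
    not a value mandated by the method.
    """
    longs = [baseline * ratio ** (i + 1) for i in range(n_long)]
    shorts = [baseline / ratio ** (i + 1) for i in range(n_short)]
    return longs, shorts

longs, shorts = bracket_exposures(100.0)  # baseline M = 100
# longs == [400.0] (i.e. 4M), shorts == [25.0] (i.e. M/4)
```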
Acquiring the at least two frames of second images with different exposures captured by the second camera may include:
(1) capturing, with the second camera, one or more frames whose exposure is greater than a second reference exposure, where the second reference exposure is the exposure corresponding to the preview stream of the second camera;
(2) capturing, with the second camera, one or more frames whose exposure is less than the second reference exposure; and
(3) determining the one or more frames with exposure greater than the second reference exposure and the one or more frames with exposure less than the second reference exposure as the at least two frames of second images with different exposures.
The at least two frames of second images with different exposures can be acquired with the second camera in the same way as for the first camera. Note that the numbers of first images and second images may be the same or different, and there is no required magnitude relationship between the exposure of each frame of the first images and that of each frame of the second images.
202. The electronic device fuses the at least two frames of first images with different exposures into a first high dynamic range image, and fuses the at least two frames of second images with different exposures into a second high dynamic range image.
Next, high dynamic range fusion is applied to the frames of the first image to obtain the first high dynamic range image, and to the frames of the second image to obtain the second high dynamic range image. High dynamic range (HDR) fusion means synthesizing one high-dynamic-range image from multiple frames of low-dynamic-range images with different exposures, in order to capture more image detail and improve image clarity. Low-dynamic-range images with larger exposure are mainly used to recover image detail in the dark regions of the scene, while those with smaller exposure are mainly used to recover detail in the highlight regions.
The HDR fusion process is briefly described as follows. First, multiple frames of low-dynamic-range images with different exposures are acquired (images of the same scene at the same moment) — say three frames, a long-exposure frame, a medium-exposure frame, and a short-exposure frame, where the exposure of the long-exposure frame > the exposure of the medium-exposure frame > the exposure of the short-exposure frame, and the three frames have already been registered and pixel-aligned. Then, the fusion weight maps corresponding to the long-, medium-, and short-exposure frames are obtained (a fusion weight map contains the fusion weight of each pixel and can be set manually based on empirical values). Finally, the long-exposure frame is multiplied by its fusion weight map, the medium-exposure frame by its fusion weight map, and the short-exposure frame by its fusion weight map, and the three products are summed to obtain the corresponding high dynamic range image. In addition, for an image in the RGB domain, the above HDR fusion is performed separately on the R, G, and B channels; for an image in the YUV domain, it is performed on the Y (luminance) channel, while the UV (chrominance) channels directly reuse the UV values of the medium-exposure frame.
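The per-pixel weighted fusion just described can be sketched as follows. This is a minimal illustration, assuming registered single-channel frames and externally supplied weight maps that sum to 1 at every pixel; the function name is illustrative, not taken from the source:

```python
import numpy as np

def hdr_fuse(frames, weight_maps):
    """Fuse registered LDR exposure frames into one HDR image.

    Each frame is multiplied element-wise by its per-pixel fusion
    weight map, and the products are summed, as in the description
    above. frames and weight_maps are same-shaped 2-D arrays.
    """
    out = np.zeros(np.asarray(frames[0]).shape, dtype=np.float64)
    for frame, w in zip(frames, weight_maps):
        out += np.asarray(frame, dtype=np.float64) * np.asarray(w, dtype=np.float64)
    return out

# Toy 1x2 example: long/medium/short frames and weights summing to 1.
long_f  = np.array([[200.0, 250.0]])
mid_f   = np.array([[120.0, 130.0]])
short_f = np.array([[40.0,  20.0]])
w_long  = np.array([[0.2, 0.0]])
w_mid   = np.array([[0.6, 0.5]])
w_short = np.array([[0.2, 0.5]])
fused = hdr_fuse([long_f, mid_f, short_f], [w_long, w_mid, w_short])
# fused == [[120.0, 75.0]]
```

For an RGB image the same call would be made once per channel, matching the per-channel processing described above.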
On the other hand, when the first and second high dynamic range images are fused, a large difference in image characteristics such as brightness between the two images causes the resulting fused image to differ greatly from the first high dynamic range image as well. In a typical image fusion scenario, however, the fused image is expected to have the same or similar image characteristics as the first high dynamic range image. To solve this problem, image characteristics such as the brightness of the first high dynamic range image can be used to guide the HDR effect of the process that fuses the frames of the second image, so that the resulting second high dynamic range image has the same or similar brightness and other image characteristics as the first high dynamic range image. Specific implementations of this HDR-effect guidance based on the first high dynamic range image are described below.
In one implementation of the present application, fusing the at least two frames of second images with different exposures into the second high dynamic range image may include:
(1) computing, from the image feature of each pixel in the first high dynamic range image, the weight with which the image feature of each pixel in the second high dynamic range image is drawn from each of the second images; and
(2) fusing the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image is drawn from each of the second images.
Regarding step (1) above: once the first high dynamic range image has been generated, the image feature of each of its pixels (e.g., RGB values or brightness values) is known, so the weight with which the image feature of each pixel in the second high dynamic range image is drawn from each second image — that is, the fusion weight of each second image — can be computed from those features. The basic criterion is that the second high dynamic range image obtained after fusion should have the same or similar image features as the first high dynamic range image.
Regarding step (2) above: once the weights with which the image feature of each pixel in the second high dynamic range image is drawn from each second image have been obtained, HDR fusion can be performed on the second images to obtain the corresponding second high dynamic range image. Suppose the second images are I1 and I2, and for some pixel Q in the second high dynamic range image to be generated, the weight of its image feature drawn from I1 is X and the weight drawn from I2 is Y. If the image feature of the pixel corresponding to Q is A in I1 and B in I2, then when I1 and I2 are fused into the second high dynamic range image, the image feature of Q in the second high dynamic range image is A*X + B*Y. By analogy, the image feature of every pixel in the second high dynamic range image can be computed.
In one implementation of the present application, computing, from the image feature of each pixel in the first high dynamic range image, the weight with which the image feature of each pixel in the second high dynamic range image is drawn from each second image may include:
(1) computing, from the image feature of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weight with which the image feature of each pixel in the first high dynamic range image is drawn from each of the first images; and
(2) computing, from the weights with which the image feature of each pixel in the first high dynamic range image is drawn from each of the first images, the weight with which the image feature of each pixel in the second high dynamic range image is drawn from each of the second images.
After the first high dynamic range image has been obtained, the fusion weight of each first image — that is, the weight with which the image feature of each pixel in the first high dynamic range image is drawn from each first image — can be computed from the image feature of each pixel of the first high dynamic range image and the HDR fusion algorithm used to produce it. Specifically, suppose the first images are three low-dynamic-range frames, a long-exposure frame, a medium-exposure frame, and a short-exposure frame, with exposure of the long-exposure frame > exposure of the medium-exposure frame > exposure of the short-exposure frame, and suppose the image feature of a pixel Q is A in the long-exposure frame, B in the medium-exposure frame, and C in the short-exposure frame. Then the weight X of the image feature of pixel Q in the first high dynamic range image drawn from the long-exposure frame, the weight Y drawn from the medium-exposure frame, and the weight Z drawn from the short-exposure frame can be computed with the following formulas:
X = exp(-(A - 128)^2 / (2σ^2))

Y = exp(-(B - 128)^2 / (2σ^2))

Z = exp(-(C - 128)^2 / (2σ^2))
Here σ can be set to 52 or another value based on experience, and the weights X, Y, and Z computed with the above formulas can be normalized so that X + Y + Z = 1.
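One concrete possibility for this weight computation, assuming a Gaussian well-exposedness weight centered at mid-gray (the text fixes only σ = 52; the center value of 128 for 8-bit features and the function name are assumptions of this sketch), is:

```python
import math

def exposure_weights(features, sigma=52.0, center=128.0):
    """Per-pixel Gaussian well-exposedness weights across frames.

    `features` holds one pixel's image feature in each exposure frame,
    e.g. (A, B, C) for the long/medium/short frames. Pixels near
    mid-gray (`center`, assumed 128 for 8-bit data) get large weights;
    over- or under-exposed pixels get small ones. The returned weights
    are normalized so that they sum to 1 (X + Y + Z = 1).
    """
    raw = [math.exp(-((f - center) ** 2) / (2.0 * sigma ** 2)) for f in features]
    total = sum(raw)
    return [r / total for r in raw]

# A = 200 (long), B = 130 (medium), C = 40 (short):
X, Y, Z = exposure_weights([200.0, 130.0, 40.0])
# The medium-exposure frame, closest to mid-gray, gets the largest weight.
```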
For every pixel in the first high dynamic range image, the weights with which its image feature is drawn from each first image can be computed in the same way as for the pixel Q above.
In one implementation of the present application, computing, from the weights with which the image feature of each pixel in the first high dynamic range image is drawn from each first image, the weights with which the image feature of each pixel in the second high dynamic range image is drawn from each second image may include:
for any target pixel in the second high dynamic range image, determining the weights with which the image feature of the pixel corresponding to the target pixel in the first high dynamic range image is drawn from each of the first images as the weights with which the image feature of the target pixel is drawn from each of the second images.
This is a weight-reuse implementation: the fusion weights of the first high dynamic range image are reused in the fusion of the second high dynamic range image. For example, suppose the first images are a first long-exposure frame, a first medium-exposure frame, and a first short-exposure frame, and the second images are a second long-exposure frame, a second medium-exposure frame, and a second short-exposure frame. Suppose the image feature of a target pixel Q in the first high dynamic range image is P, and the image feature of Q is A in the first long-exposure frame, B in the first medium-exposure frame, C in the first short-exposure frame, D in the second long-exposure frame, E in the second medium-exposure frame, and F in the second short-exposure frame. Then P = A*X + B*Y + C*Z, where X is the weight with which the image feature of Q is drawn from the first long-exposure frame, Y the weight drawn from the first medium-exposure frame, and Z the weight drawn from the first short-exposure frame. X, Y, and Z can then be used directly as the fusion weights of the second long-, medium-, and short-exposure frames, so that the image feature of Q in the second high dynamic range image is S = D*X + E*Y + F*Z. Note that the image feature may be the feature of any channel in the RGB domain, or of the Y (luminance) channel in the YUV domain. When the image features of the first images and the second images are close, weight reuse yields a second high dynamic range image whose image features are close to those of the first high dynamic range image.
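Per pixel, this weight reuse amounts to applying the wide-view fusion weights directly to the narrow-view frame features, as in S = D*X + E*Y + F*Z. A minimal sketch (the function name and the numeric values are illustrative only):

```python
def fuse_with_reused_weights(second_features, first_weights):
    """Compute one pixel of the second HDR image by reusing the
    fusion weights obtained for the first HDR image.

    second_features: (D, E, F) - the pixel's image feature in the
        second long/medium/short exposure frames.
    first_weights:   (X, Y, Z) - the weights the same pixel received
        when fusing the first camera's frames (assumed to sum to 1).
    Returns S = D*X + E*Y + F*Z.
    """
    return sum(f * w for f, w in zip(second_features, first_weights))

# Reuse the first camera's weights X=0.25, Y=0.6, Z=0.15 on the
# second camera's features D=180, E=110, F=30:
S = fuse_with_reused_weights((180.0, 110.0, 30.0), (0.25, 0.6, 0.15))
# S == 180*0.25 + 110*0.6 + 30*0.15 == 115.5
```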
In another implementation of the present application, computing, from the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image is drawn from each second image may include:
for any target pixel in the second high dynamic range image, computing the weights with which the image feature of the target pixel is drawn from each of the second images according to the image feature of the pixel corresponding to the target pixel in the first high dynamic range image and the image features of the corresponding pixels in each of the second images, where the image feature of the target pixel is equal to the image feature of its corresponding pixel in the first high dynamic range image.
In some cases, the image features of the first images and the second images differ greatly. To satisfy the requirement that the image features of the second high dynamic range image be the same as or similar to those of the first high dynamic range image, it is then inappropriate to determine the fusion weights of the second images by weight reuse. For such cases, a feature-reuse implementation can be used: taking as a known condition that corresponding pixels in the two high dynamic range images have the same or similar image features, the fusion weight of each second image is computed.
具体的,假设各个第二图像包括第一曝光图像和第二曝光图像,其中第一曝光图像的曝光量大于第二曝光图像的曝光量,则可以根据以下公式计算得到所述目标像素点的图像特征分别来自于第一曝光图像和第二曝光图像的权重:Specifically, assuming that each second image includes a first exposure image and a second exposure image, wherein the exposure of the first exposure image is greater than the exposure of the second exposure image, the image of the target pixel can be calculated according to the following formula The features come from the weights of the first exposure image and the second exposure image respectively:
A*X + B*Y = P
X + Y = 1
Here, X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image (or another value close to that image feature).
In the above formulas, A, B, and P are all known values, so the weights X and Y can be solved for. For example, if the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is 92, the image feature of the corresponding pixel in the first exposure image is 120, and the image feature of the corresponding pixel in the second exposure image is 10, the following formulas are obtained:
120*X + 10*Y = 92
X + Y = 1
Solving these formulas gives the weights X = 74.5% and Y = 25.5%. Clearly, the corresponding fusion weights of every pixel can be calculated in the same way; when the first exposure image and the second exposure image are fused, each pixel is fused according to its own fusion weights, thereby obtaining a second high dynamic range image whose image features are the same as those of the first high dynamic range image.
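The two-frame system above has the closed-form solution X = (P - B)/(A - B), Y = 1 - X. A minimal per-pixel sketch (function and variable names are ours) reproduces the worked example:

```python
def two_exposure_weights(a, b, p):
    """Solve A*X + B*Y = P with X + Y = 1 for one pixel.

    a, b: image features of the pixel in the larger- and
          smaller-exposure second images.
    p:    target feature taken from the first high dynamic range image.
    Returns the fusion weights (X, Y).
    """
    if a == b:
        # Degenerate case: both exposures agree, so any split reproduces p.
        return 0.5, 0.5
    x = (p - b) / (a - b)
    return x, 1.0 - x

# Worked example from the text: A=120, B=10, P=92.
x, y = two_exposure_weights(120, 10, 92)  # x is about 0.745, y about 0.255
check = 120 * x + 10 * y                  # reproduces the target feature 92
```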
In another implementation of the present application, assume that the second images include a first exposure image, a second exposure image, and a third exposure image, where the exposure of the first exposure image is greater than that of the second exposure image, and the exposure of the second exposure image is greater than that of the third exposure image. In this case, the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image respectively can be calculated according to the following formulas:
A*X + B*Y + C*Z = P
X + Y + Z = 1
Here, X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, and Z denotes the weight with which the image feature of the target pixel is taken from the third exposure image; A, B, and C denote the image features of the corresponding pixels of the target pixel in the first, second, and third exposure images respectively; and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
In the above formulas, A, B, C, and P are all known values, but since there are three unknowns X, Y, and Z, an additional constraint must be introduced before the values of X, Y, and Z can be calculated. Specifically, according to the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images, any one of the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image can be set to a given value.
For example, assume that the first images include a fourth exposure image, a fifth exposure image, and a sixth exposure image with successively decreasing exposures, and that the image feature of the corresponding pixel of the target pixel is 100 in the fourth exposure image (fusion weight 50%), 50 in the fifth exposure image (fusion weight 20%), and 10 in the sixth exposure image (fusion weight 30%). After fusion, the image feature of the corresponding pixel in the first high dynamic range image is 100*50% + 50*20% + 10*30% = 63. Further assume that the image feature of the corresponding pixel of the target pixel is 120 in the first exposure image, 60 in the second exposure image, and 5 in the third exposure image. According to the above formulas:
120*X + 60*Y + 5*Z = 63
X + Y + Z = 1
At this point, any one of the weights X, Y, and Z can be set to a given value. For example, since the fusion weight corresponding to the target pixel in the fourth exposure image is the largest (50%), the fusion weight X corresponding to the target pixel in the first exposure image (the first exposure image corresponds to the fourth exposure image; both are the images with the largest exposure) can be set to 50%, after which the values of the other two weights Y and Z can be calculated. It should be noted that the image feature targeted by this calculation may be the feature of any channel in the RGB domain, or the feature of the Y channel (luminance) in the YUV domain. As for the UV channels (chrominance) in the YUV domain, the features of the image with the middle exposure can be used directly; for example, in the above example, the corresponding UV values of the target pixel in the second exposure image can be used as the UV values of that target pixel in the fused second high dynamic range image.
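Fixing X reduces the three-frame system to two equations in Y and Z. A small sketch (our names, values from the worked example) solves the remainder:

```python
def three_exposure_weights(a, b, c, p, x_fixed):
    """Solve A*X + B*Y + C*Z = P with X + Y + Z = 1, after fixing X to
    the weight reused from the first-image fusion.

    a, b, c: image features of the pixel in the first, second, and
             third (decreasing-exposure) second images.
    p:       target feature from the first high dynamic range image.
    """
    remaining = 1.0 - x_fixed   # Y + Z must sum to this
    rhs = p - a * x_fixed       # B*Y + C*Z must equal this
    if b == c:
        return x_fixed, remaining / 2, remaining / 2
    y = (rhs - c * remaining) / (b - c)
    return x_fixed, y, remaining - y

# Worked example from the text: A=120, B=60, C=5, P=63, X set to 50%.
x, y, z = three_exposure_weights(120, 60, 5, 63, 0.5)
check = 120 * x + 60 * y + 5 * z   # reproduces the target feature 63
```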
The above describes specific implementations of HDR effect guidance based on the first high dynamic range image. With this arrangement, the obtained second high dynamic range image has image features, such as brightness, that are the same as or similar to those of the first high dynamic range image, so as to meet the needs of the specified image fusion scene.
203. The electronic device fuses the first high dynamic range image and the second high dynamic range image to obtain a fused image.
After the first high dynamic range image and the second high dynamic range image are obtained, the two high dynamic range images can be fused; specifically, the second high dynamic range image is fused into the region of the first high dynamic range image that corresponds to the second high dynamic range image, thereby obtaining the fused image. This high dynamic range image fusion process mainly includes image registration and image feature superposition; for details, reference may be made to the related content on image fusion in the prior art, which is not repeated here.
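The superposition step can be sketched as below. Registration is assumed already done (the region offset is known), and the weighted overlay with a hypothetical blend factor `alpha` merely stands in for whatever feature-superposition scheme the prior art provides:

```python
def superimpose_region(first_hdr, second_hdr, top, left, alpha=0.7):
    """Fuse the second HDR image into its corresponding region of the
    first HDR image. Images are nested lists of pixel features; (top,
    left) is the registered offset of the region; alpha is a
    hypothetical blend factor, not specified by the text."""
    fused = [row[:] for row in first_hdr]  # copy the first image
    for i, row in enumerate(second_hdr):
        for j, value in enumerate(row):
            base = fused[top + i][left + j]
            fused[top + i][left + j] = alpha * value + (1 - alpha) * base
    return fused

# 4x4 wide-field image with a 2x2 telephoto detail pasted at offset (1, 1).
wide = [[10] * 4 for _ in range(4)]
tele = [[50, 60], [70, 80]]
out = superimpose_region(wide, tele, 1, 1)
```

Pixels outside the registered region keep the first image's values, so only the target region is refined.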
In the embodiment of the present application, first, at least two frames of large-field-of-view images with different exposures and at least two frames of small-field-of-view images with different exposures are acquired; then, high dynamic range fusion processing is performed on the large-field-of-view images and the small-field-of-view images respectively to obtain a first high dynamic range image corresponding to the large-field-of-view images and a second high dynamic range image corresponding to the small-field-of-view images; finally, the first high dynamic range image and the second high dynamic range image are fused to obtain the fused image. By performing high dynamic range fusion processing on the small-field-of-view images, the above process can recover some of the image details lost to overexposure in the small-field-of-view images, thereby improving the definition of the fused image.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 3 is a schematic diagram of the shooting ranges of the two cameras with different fields of view used in the embodiment of the present application. The camera with the larger field of view is the first camera, the camera with the smaller field of view is the second camera, and the first camera and the second camera are located on the same face of one electronic device. In Fig. 3, the shooting ranges of the two cameras are shown by two rectangular boxes; clearly the shooting range of the first camera contains that of the second camera, so the image captured by the first camera (the first image) contains the image captured by the second camera (the second image). In practical applications, the first camera may be a camera with a larger field of view such as a wide-angle camera or an ordinary camera, and the second camera may be a camera with a smaller field of view such as a telephoto camera; however, it should be understood that any two cameras with different fields of view, of any type, can serve as the first camera and the second camera of the embodiments of the present application.
Fig. 4 is a schematic diagram of a large-field-of-view image and the corresponding small-field-of-view image provided by an embodiment of the present application. In Fig. 4, the left image is the large-field-of-view image, that is, a first image captured by the first camera described above; the right image is the small-field-of-view image, that is, a second image captured by the second camera described above. In the prior art, in order to improve the definition of the distant-scene region in the obtained large-field-of-view image, the small-field-of-view image is usually fused into the large-field-of-view image. However, due to backlighting and other causes, some regions of the small-field-of-view image may be overexposed (such as the overexposed region marked in Fig. 4), resulting in a loss of image detail, which affects the definition of the fused image.
In view of the above problems, the embodiment of the present application proposes an image fusion method, whose operating principle is shown schematically in Fig. 5. In Fig. 5, after the user taps to take a photo, the main path (that is, the shooting channel where the main camera is located; the main camera can usually be an ordinary camera with a larger field of view) sets parameters for different exposures according to the preview stream of the main camera and captures a main-path long-exposure frame and a main-path short-exposure frame, where the exposure of the main-path long-exposure frame is greater than that of the main-path short-exposure frame; then, an HDR fusion operation is performed on the main-path long-exposure frame and the main-path short-exposure frame to obtain the main-path high dynamic range image.
Similar to the main path, the auxiliary path (that is, the shooting channel where the auxiliary camera is located; the auxiliary camera can usually be a telephoto camera with a smaller field of view) sets parameters for different exposures according to the preview stream of the auxiliary camera and captures an auxiliary-path long-exposure frame and an auxiliary-path short-exposure frame, where the exposure of the auxiliary-path long-exposure frame is greater than that of the auxiliary-path short-exposure frame; then, an HDR fusion operation is performed on the auxiliary-path long-exposure frame and the auxiliary-path short-exposure frame to obtain the auxiliary-path high dynamic range image. Finally, the main-path high dynamic range image and the auxiliary-path high dynamic range image are fused to obtain the fused image.
Further, in the image fusion method shown in Fig. 5, the HDR fusion of the auxiliary-path long-exposure frame and the auxiliary-path short-exposure frame can be guided according to the image features of the main-path high dynamic range image (as shown by the dotted line in Fig. 5), so that the obtained auxiliary-path high dynamic range image has image features, such as brightness, that are the same as or similar to those of the main-path high dynamic range image. That is, the main path can guide the HDR effect of the auxiliary path; for the specific guidance method, refer to the related content above.
To illustrate the processing effect of the image fusion method shown in Fig. 5, actually captured images are introduced. Fig. 6 is a schematic diagram of the effect of the main-path long-exposure frame, the main-path short-exposure frame, and the main-path high dynamic range image in Fig. 5. In Fig. 6, the main-path long-exposure frame and the main-path short-exposure frame are images of the same balcony scene at different exposures; the main-path long-exposure frame is the image with the larger exposure, so its overall brightness is visibly brighter, while the main-path short-exposure frame is the image with the smaller exposure, so its overall brightness is visibly darker. After HDR fusion of the main-path long-exposure frame and the main-path short-exposure frame, the main-path high dynamic range image is obtained; its brightness is moderate, and the image details lost in the main-path long-exposure frame and/or the main-path short-exposure frame are recovered to a certain extent.
Fig. 7 is a schematic diagram of the effect of the auxiliary-path long-exposure frame, the auxiliary-path short-exposure frame, and the auxiliary-path high dynamic range image in Fig. 5. In Fig. 7, the auxiliary-path long-exposure frame and the auxiliary-path short-exposure frame are images of the same balcony scene (the same scene as in Fig. 6) at different exposures; the auxiliary-path long-exposure frame is the image with the larger exposure, so its overall brightness is visibly brighter, while the auxiliary-path short-exposure frame is the image with the smaller exposure, so its overall brightness is visibly darker. Since the field of view of the auxiliary-path camera is smaller than that of the main-path camera, the main-path long-exposure frame shown in Fig. 6 contains the auxiliary-path long-exposure frame shown in Fig. 7, and the main-path short-exposure frame shown in Fig. 6 contains the auxiliary-path short-exposure frame shown in Fig. 7. After HDR fusion of the auxiliary-path long-exposure frame and the auxiliary-path short-exposure frame, the auxiliary-path high dynamic range image is obtained; its brightness is moderate, and the image details lost in the auxiliary-path long-exposure frame and/or the auxiliary-path short-exposure frame are recovered to a certain extent.
In addition, through the main path's guidance of the auxiliary path's HDR effect, the obtained auxiliary-path high dynamic range image in Fig. 7 has image features, such as brightness, that are the same as or similar to those of the main-path high dynamic range image in Fig. 6.
Fig. 8 is a schematic diagram of the effect of fusing the main-path high dynamic range image in Fig. 6 and the auxiliary-path high dynamic range image in Fig. 7. Since the main-path high dynamic range image and the auxiliary-path high dynamic range image have the same or similar image features, the influence of image fusion on image features such as brightness and color can be reduced, so that the obtained fused image has the same or similar image features as the main-path image. The right side of Fig. 8 is a schematic diagram of the fused image, in which the region within the dotted frame is the target region of image fusion; it can be seen that the definition of this target region is improved to a certain extent.
Corresponding to the image fusion method described in the above embodiments, Fig. 9 shows a structural block diagram of an image fusion apparatus provided by an embodiment of the present application.
Referring to Fig. 9, the apparatus includes:
an image acquisition module 901, configured to acquire at least two frames of first images with different exposures captured by a first camera and at least two frames of second images with different exposures captured by a second camera, where the field of view of the first camera is larger than that of the second camera, and the first images contain the second images;
a high dynamic range processing module 902, configured to fuse the at least two frames of first images with different exposures into a first high dynamic range image, and fuse the at least two frames of second images with different exposures into a second high dynamic range image; and
an image fusion module 903, configured to fuse the first high dynamic range image and the second high dynamic range image to obtain a fused image.
In an embodiment of the present application, the high dynamic range processing module may include:
a fusion weight calculation unit, configured to calculate, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images; and
a high dynamic range fusion unit, configured to fuse the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images.
In an embodiment of the present application, the fusion weight calculation unit may include:
a first fusion weight calculation subunit, configured to calculate, according to the image feature of each pixel in the first high dynamic range image and the high dynamic range fusion algorithm corresponding to the first high dynamic range image, the weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images; and
a second fusion weight calculation subunit, configured to calculate, according to the weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images.
Further, the second fusion weight calculation subunit may be specifically configured to: for any target pixel in the second high dynamic range image, determine the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images as the weights with which the image feature of the target pixel is taken from each of the second images.
In an embodiment of the present application, the fusion weight calculation unit may include:
a third fusion weight calculation subunit, configured to: for any target pixel in the second high dynamic range image, calculate, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images, where the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
Further, the at least two frames of second images with different exposures include a first exposure image and a second exposure image, the exposure of the first exposure image being greater than that of the second exposure image; the third fusion weight calculation subunit may be specifically configured to calculate, according to the following formulas, the weights with which the image feature of the target pixel is taken from the first exposure image and the second exposure image respectively:
A*X + B*Y = P
X + Y = 1
where X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
Further, the at least two frames of second images with different exposures include a first exposure image, a second exposure image, and a third exposure image, the exposure of the first exposure image being greater than that of the second exposure image, and the exposure of the second exposure image being greater than that of the third exposure image; the third fusion weight calculation subunit may specifically include:
a weight setting subunit, configured to set, according to the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images, any one of the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image to a given value; and
a formula calculation subunit, configured to calculate, according to the following formulas, the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image respectively:
A*X + B*Y + C*Z = P
X + Y + Z = 1
where X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, and Z denotes the weight with which the image feature of the target pixel is taken from the third exposure image; A, B, and C denote the image features of the corresponding pixels of the target pixel in the first, second, and third exposure images respectively; and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
In an embodiment of the present application, the image acquisition module may include:
a first image capture unit, configured to capture, through the first camera, one or more frames of images whose exposure is greater than a first reference exposure, where the first reference exposure is the exposure corresponding to the preview stream of the first camera;
a second image capture unit, configured to capture, through the first camera, one or more frames of images whose exposure is less than the first reference exposure;
a first image determination unit, configured to determine the one or more frames of images whose exposure is greater than the first reference exposure and the one or more frames of images whose exposure is less than the first reference exposure as the at least two frames of first images with different exposures;
a third image capture unit, configured to capture, through the second camera, one or more frames of images whose exposure is greater than a second reference exposure, where the second reference exposure is the exposure corresponding to the preview stream of the second camera;
a fourth image capture unit, configured to capture, through the second camera, one or more frames of images whose exposure is less than the second reference exposure; and
a second image determination unit, configured to determine the one or more frames of images whose exposure is greater than the second reference exposure and the one or more frames of images whose exposure is less than the second reference exposure as the at least two frames of second images with different exposures.
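The capture units above bracket around the preview-stream reference exposure. A minimal sketch of that bracketing; the ratio of 2x per step and the one-frame-per-side default are illustrative assumptions not fixed by the text:

```python
def exposure_bracket(reference_exposure, ratio=2.0, frames_each_side=1):
    """Return exposures above and below the preview-stream reference
    exposure. The step ratio and frame count are hypothetical choices;
    the text only requires at least one frame on each side."""
    above = [reference_exposure * ratio ** k
             for k in range(1, frames_each_side + 1)]
    below = [reference_exposure / ratio ** k
             for k in range(1, frames_each_side + 1)]
    return above + below

# One frame above and one below a reference exposure of 100 units.
bracket = exposure_bracket(100.0)  # [200.0, 50.0]
```

The same bracketing is applied independently to the first and second cameras, each against its own preview-stream reference exposure.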
An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the image fusion methods proposed in the present application are implemented.
An embodiment of the present application further provides a computer program product. When the computer program product runs on an electronic device, the electronic device is caused to execute the image fusion methods proposed in the present application.
Fig. 10 is a schematic diagram of an electronic device provided by an embodiment of the present application. As shown in Fig. 10, the electronic device 100 of this embodiment includes: at least one processor 1000 (only one is shown in Fig. 10), a memory 1001, and a computer program 1002 stored in the memory 1001 and executable on the at least one processor 1000. When the processor 1000 executes the computer program 1002, the steps in any of the above image fusion method embodiments are implemented.
The electronic device may include, but is not limited to, the processor 1000 and the memory 1001. Those skilled in the art will understand that Fig. 10 is merely an example of the electronic device 100 and does not constitute a limitation on it: the device may include more or fewer components than shown, combine certain components, or use different components, and may further include, for example, input/output devices and network access devices.
The processor 1000 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
In some embodiments, the memory 1001 may be an internal storage unit of the electronic device 100, such as a hard disk or internal memory of the electronic device 100. In other embodiments, the memory 1001 may be an external storage device of the electronic device 100, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 100. Further, the memory 1001 may include both an internal storage unit and an external storage device of the electronic device 100. The memory 1001 is configured to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 1001 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units or modules as required; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or some of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for ease of mutual distinction and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above apparatus, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For a part that is not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and in actual implementation there may be other ways of division. For example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the electronic device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (18)

  1. An image fusion method, comprising:
    acquiring at least two frames of first images with different exposures captured by a first camera, and acquiring at least two frames of second images with different exposures captured by a second camera, wherein a field of view of the first camera is larger than a field of view of the second camera, and the first image contains the second image;
    fusing the at least two frames of first images with different exposures into a first high dynamic range image, and fusing the at least two frames of second images with different exposures into a second high dynamic range image; and
    fusing the first high dynamic range image and the second high dynamic range image to obtain a fused image.
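The three steps of claim 1 can be sketched as follows. The per-pixel weighted averaging used for both fusion stages and the scalar `blend` factor are illustrative assumptions standing in for the high dynamic range fusion algorithms the claims leave unspecified, and the sketch assumes the second camera's images have already been aligned to the first over their overlapping region.

```python
import numpy as np

def fuse_to_hdr(frames, weights):
    """Per-pixel weighted fusion of an exposure bracket into one HDR
    image; `weights` holds one weight map per frame, and the maps sum
    to 1 at every pixel."""
    return sum(w * f for w, f in zip(weights, frames))

def image_fusion(first_images, first_weights, second_images, second_weights, blend=0.5):
    """Claim 1 pipeline: fuse each camera's bracket into an HDR image,
    then fuse the two HDR images over their (pre-aligned) overlap."""
    first_hdr = fuse_to_hdr(first_images, first_weights)     # wide field of view
    second_hdr = fuse_to_hdr(second_images, second_weights)  # narrow field of view
    return blend * first_hdr + (1.0 - blend) * second_hdr
```

In a full implementation the final blend would be applied only where the narrow-field image overlaps the wide-field image, with the wide-field HDR result kept elsewhere.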
  2. The image fusion method according to claim 1, wherein fusing the at least two frames of second images with different exposures into the second high dynamic range image comprises:
    calculating, according to an image feature of each pixel in the first high dynamic range image, weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images; and
    fusing the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images.
  3. The image fusion method according to claim 2, wherein calculating, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images comprises:
    calculating, according to the image feature of each pixel in the first high dynamic range image and a high dynamic range fusion algorithm corresponding to the first high dynamic range image, weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images; and
    calculating, according to the weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images.
  4. The image fusion method according to claim 3, wherein calculating, according to the weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images comprises:
    for any target pixel in the second high dynamic range image, determining the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images as the weights with which the image feature of the target pixel is taken from each of the second images.
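As an illustration of this weight reuse, the following sketch assumes the narrow-field second image corresponds to a simple axis-aligned crop of the wide-field first image at a known `offset`; the actual pixel correspondence between the two cameras is established elsewhere and may involve scaling and warping rather than a plain crop.

```python
import numpy as np

def reuse_wide_weights(wide_weights, offset, narrow_shape):
    """Claim 4: for every target pixel of the narrow-FOV second HDR
    image, reuse the fusion weights of its corresponding pixel in the
    wide-FOV first HDR image. `wide_weights` is one per-pixel weight
    map per first image; `offset` is the assumed (row, col) of the
    narrow image's origin inside the wide image."""
    r0, c0 = offset
    h, w = narrow_shape
    return [wmap[r0:r0 + h, c0:c0 + w] for wmap in wide_weights]
```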
  5. The image fusion method according to claim 2, wherein calculating, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images comprises:
    for any target pixel in the second high dynamic range image, calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images, wherein the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  6. The image fusion method according to claim 5, wherein the at least two frames of second images with different exposures comprise a first exposure image and a second exposure image, and an exposure of the first exposure image is greater than an exposure of the second exposure image; and
    calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images comprises:
    calculating, according to the following formulas, the weights with which the image feature of the target pixel is taken from the first exposure image and the second exposure image:
    A*X+B*Y=P
    X+Y=1
    wherein X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
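These two equations have a closed-form solution. A minimal sketch, assuming the two exposures differ at the pixel in question (A ≠ B):

```python
def two_frame_weights(A, B, P):
    """Solve  A*X + B*Y = P  with  X + Y = 1  for the weights X and Y.
    A, B: the target pixel's image features in the first (longer) and
    second (shorter) exposure images; P: its feature in the first high
    dynamic range image. Assumes A != B."""
    X = (P - B) / (A - B)
    return X, 1.0 - X
```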
  7. The image fusion method according to claim 5, wherein the at least two frames of second images with different exposures comprise a first exposure image, a second exposure image, and a third exposure image, an exposure of the first exposure image is greater than an exposure of the second exposure image, and the exposure of the second exposure image is greater than an exposure of the third exposure image; and
    calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images comprises:
    setting, according to the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images, any one of the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image to a set value; and
    calculating, according to the following formulas, the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image:
    A*X+B*Y+C*Z=P
    X+Y+Z=1
    wherein X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, Z denotes the weight with which the image feature of the target pixel is taken from the third exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, C denotes the image feature of the corresponding pixel of the target pixel in the third exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
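With one weight fixed to the set value, the remaining two equations again solve in closed form. A minimal sketch, taking Z as the fixed weight and assuming A ≠ B; the claim allows any of the three weights to be the set value, and both the choice of Z and the default 0.2 are illustrative:

```python
def three_frame_weights(A, B, C, P, Z=0.2):
    """Solve  A*X + B*Y + C*Z = P  with  X + Y + Z = 1  after fixing Z
    to a set value. A, B, C: the target pixel's features in the three
    exposure images; P: its feature in the first high dynamic range
    image. Assumes A != B."""
    X = (P - C * Z - B * (1.0 - Z)) / (A - B)
    Y = 1.0 - Z - X
    return X, Y, Z
```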
  8. The image fusion method according to any one of claims 1 to 7, wherein acquiring the at least two frames of first images with different exposures captured by the first camera comprises:
    capturing, through the first camera, one or more frames of images whose exposure is greater than a first reference exposure, the first reference exposure being the exposure corresponding to a preview stream of the first camera;
    capturing, through the first camera, one or more frames of images whose exposure is less than the first reference exposure; and
    determining the one or more frames of images whose exposure is greater than the first reference exposure and the one or more frames of images whose exposure is less than the first reference exposure as the at least two frames of first images with different exposures; and
    acquiring the at least two frames of second images with different exposures captured by the second camera comprises:
    capturing, through the second camera, one or more frames of images whose exposure is greater than a second reference exposure, the second reference exposure being the exposure corresponding to a preview stream of the second camera;
    capturing, through the second camera, one or more frames of images whose exposure is less than the second reference exposure; and
    determining the one or more frames of images whose exposure is greater than the second reference exposure and the one or more frames of images whose exposure is less than the second reference exposure as the at least two frames of second images with different exposures.
  9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the electronic device implements the following image fusion method:
    acquiring at least two frames of first images with different exposures captured by a first camera, and acquiring at least two frames of second images with different exposures captured by a second camera, wherein a field of view of the first camera is larger than a field of view of the second camera, and the first image contains the second image;
    fusing the at least two frames of first images with different exposures into a first high dynamic range image, and fusing the at least two frames of second images with different exposures into a second high dynamic range image; and
    fusing the first high dynamic range image and the second high dynamic range image to obtain a fused image.
  10. The electronic device according to claim 9, wherein fusing the at least two frames of second images with different exposures into the second high dynamic range image comprises:
    calculating, according to an image feature of each pixel in the first high dynamic range image, weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images; and
    fusing the at least two frames of second images with different exposures into the second high dynamic range image according to the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images.
  11. The electronic device according to claim 10, wherein calculating, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images comprises:
    calculating, according to the image feature of each pixel in the first high dynamic range image and a high dynamic range fusion algorithm corresponding to the first high dynamic range image, weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images; and
    calculating, according to the weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images.
  12. The electronic device according to claim 11, wherein calculating, according to the weights with which the image feature of each pixel in the first high dynamic range image is taken from each of the first images, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images comprises:
    for any target pixel in the second high dynamic range image, determining the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images as the weights with which the image feature of the target pixel is taken from each of the second images.
  13. The electronic device according to claim 10, wherein calculating, according to the image feature of each pixel in the first high dynamic range image, the weights with which the image feature of each pixel in the second high dynamic range image is taken from each of the second images comprises:
    for any target pixel in the second high dynamic range image, calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images, wherein the image feature of the target pixel is equal to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  14. The electronic device according to claim 13, wherein the at least two frames of second images with different exposures comprise a first exposure image and a second exposure image, and an exposure of the first exposure image is greater than an exposure of the second exposure image; and
    calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images comprises:
    calculating, according to the following formulas, the weights with which the image feature of the target pixel is taken from the first exposure image and the second exposure image:
    A*X+B*Y=P
    X+Y=1
    wherein X denotes the weight with which the image feature of the target pixel is taken from the first exposure image, Y denotes the weight with which the image feature of the target pixel is taken from the second exposure image, A denotes the image feature of the corresponding pixel of the target pixel in the first exposure image, B denotes the image feature of the corresponding pixel of the target pixel in the second exposure image, and P denotes the image feature of the corresponding pixel of the target pixel in the first high dynamic range image.
  15. The electronic device according to claim 13, wherein the at least two frames of second images with different exposures comprise a first exposure image, a second exposure image, and a third exposure image, an exposure of the first exposure image is greater than an exposure of the second exposure image, and the exposure of the second exposure image is greater than an exposure of the third exposure image; and
    calculating, according to the image feature of the corresponding pixel of the target pixel in the first high dynamic range image and the image features of the corresponding pixels of the target pixel in each of the second images, the weights with which the image feature of the target pixel is taken from each of the second images comprises:
    setting, according to the weights with which the image feature of the corresponding pixel of the target pixel in the first high dynamic range image is taken from each of the first images, any one of the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image to a set value; and
    calculating, according to the following formulas, the weights with which the image feature of the target pixel is taken from the first exposure image, the second exposure image, and the third exposure image:
    A*X+B*Y+C*Z=PA*X+B*Y+C*Z=P
    X+Y+Z=1X+Y+Z=1
    wherein X denotes the weight with which the image feature of the target pixel is drawn from the first exposure image, Y denotes the weight with which the image feature of the target pixel is drawn from the second exposure image, Z denotes the weight with which the image feature of the target pixel is drawn from the third exposure image, A denotes the image feature of the pixel in the first exposure image corresponding to the target pixel, B denotes the image feature of the pixel in the second exposure image corresponding to the target pixel, C denotes the image feature of the pixel in the third exposure image corresponding to the target pixel, and P denotes the image feature of the pixel in the first high dynamic range image corresponding to the target pixel.
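With three unknown weights but only two equations, the system is underdetermined, which is why the claim first pins one of the three weights to a set value; the remaining two weights then follow from the two equations. A Python sketch (illustrative only; pinning Z in particular, the function name, and the degenerate-case handling are assumptions, since the claim allows any one of the three weights to be fixed):

```python
def three_image_weights(a, b, c, p, z_fixed):
    """Solve A*X + B*Y + C*Z = P subject to X + Y + Z = 1, with Z
    pinned to a set value so the remaining 2x2 system is determined.

    a, b, c: image feature of the target pixel in the first, second,
             and third exposure images.
    p:       image feature of the same pixel in the first HDR image.
    z_fixed: the set value assigned to Z per the claim.
    """
    rhs = p - c * z_fixed   # move the fixed term to the right-hand side
    s = 1.0 - z_fixed       # weight mass left to share between X and Y
    if a == b:
        # Degenerate case: first two exposures agree; split evenly.
        return s / 2, s / 2, z_fixed
    # From A*X + B*(s - X) = rhs:  X = (rhs - B*s) / (A - B).
    x = (rhs - b * s) / (a - b)
    return x, s - x, z_fixed
```

Substituting the returned triple back into both claim equations recovers P and 1 exactly, regardless of the chosen set value.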
  16. The electronic device according to any one of claims 9 to 15, wherein acquiring at least two frames of first images with different exposure amounts captured by the first camera comprises:
    capturing, by the first camera, one or more frames of images whose exposure amount is greater than a first reference exposure amount, the first reference exposure amount being the exposure amount corresponding to a preview stream of the first camera;
    capturing, by the first camera, one or more frames of images whose exposure amount is less than the first reference exposure amount; and
    determining the one or more frames of images whose exposure amount is greater than the first reference exposure amount and the one or more frames of images whose exposure amount is less than the first reference exposure amount as the at least two frames of first images with different exposure amounts;
    and acquiring at least two frames of second images with different exposure amounts captured by the second camera comprises:
    capturing, by the second camera, one or more frames of images whose exposure amount is greater than a second reference exposure amount, the second reference exposure amount being the exposure amount corresponding to a preview stream of the second camera;
    capturing, by the second camera, one or more frames of images whose exposure amount is less than the second reference exposure amount; and
    determining the one or more frames of images whose exposure amount is greater than the second reference exposure amount and the one or more frames of images whose exposure amount is less than the second reference exposure amount as the at least two frames of second images with different exposure amounts.
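The acquisition step in claim 16 amounts to exposure bracketing around each camera's preview-stream exposure: every camera contributes at least one frame brighter and at least one frame darker than its reference. A schematic Python sketch (the EV offsets and the function name are assumed for illustration; real capture pipelines drive this through the camera HAL):

```python
def bracket_exposures(reference_ev, offsets=(1.0, -1.0)):
    """Build an exposure bracket around a camera's reference exposure
    (the exposure of its preview stream), ensuring at least one frame
    above and one frame below the reference, as the claim requires.

    reference_ev: exposure value of the preview stream.
    offsets:      EV offsets relative to the reference (assumed values).
    """
    exposures = [reference_ev + o for o in offsets]
    over = [e for e in exposures if e > reference_ev]
    under = [e for e in exposures if e < reference_ev]
    if not over or not under:
        raise ValueError("offsets must include values above and below 0")
    # Over-exposed frames first, then under-exposed frames: together
    # they form the "at least two frames with different exposures".
    return over + under
```

The same routine would be run once per camera, with each camera's own reference exposure, to produce the first-image and second-image brackets.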
  17. A computer-readable storage medium storing a computer program, wherein the image fusion method according to any one of claims 1 to 8 is implemented when the computer program is executed.
  18. A computer program product, wherein when the computer program product runs on an electronic device, the electronic device is caused to execute the image fusion method according to any one of claims 1 to 8.
PCT/CN2022/077713 2021-06-23 2022-02-24 Image fusion method, electronic device, storage medium, and computer program product WO2022267506A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110707247.7A CN115514876B (en) 2021-06-23 2021-06-23 Image fusion method, electronic device, storage medium and computer program product
CN202110707247.7 2021-06-23

Publications (1)

Publication Number Publication Date
WO2022267506A1 true WO2022267506A1 (en) 2022-12-29

Family

ID=84499590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077713 WO2022267506A1 (en) 2021-06-23 2022-02-24 Image fusion method, electronic device, storage medium, and computer program product

Country Status (2)

Country Link
CN (1) CN115514876B (en)
WO (1) WO2022267506A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116528058A (en) * 2023-05-26 2023-08-01 中国人民解放军战略支援部队航天工程大学 High dynamic imaging method and system based on compression reconstruction

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN116363038A (en) * 2023-06-02 2023-06-30 深圳英美达医疗技术有限公司 Ultrasonic image fusion method, device, computer equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US20170034414A1 (en) * 2015-07-31 2017-02-02 Via Alliance Semiconductor Co., Ltd. Methods for generating hdr (high dynamic range) images and apparatuses using the same
CN109863742A (en) * 2017-01-25 2019-06-07 华为技术有限公司 Image processing method and terminal device
CN110062159A (en) * 2019-04-09 2019-07-26 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment based on multiple image
CN112087580A (en) * 2019-06-14 2020-12-15 Oppo广东移动通信有限公司 Image acquisition method and device, electronic equipment and computer readable storage medium

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
JP5188101B2 (en) * 2007-06-01 2013-04-24 株式会社キーエンス Magnification observation apparatus, magnified image photographing method, magnified image photographing program, and computer-readable recording medium
CN102457669B (en) * 2010-10-15 2014-04-16 华晶科技股份有限公司 Image processing method
CN102970549B (en) * 2012-09-20 2015-03-18 华为技术有限公司 Image processing method and image processing device
CN105933617B (en) * 2016-05-19 2018-08-21 中国人民解放军装备学院 A kind of high dynamic range images fusion method for overcoming dynamic problem to influence
CN106791377B (en) * 2016-11-29 2019-09-27 Oppo广东移动通信有限公司 Control method, control device and electronic device
US10623634B2 (en) * 2017-04-17 2020-04-14 Intel Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching
CN110276714B (en) * 2018-03-16 2023-06-06 虹软科技股份有限公司 Method and device for synthesizing rapid scanning panoramic image
CN111418201B (en) * 2018-03-27 2021-10-15 华为技术有限公司 Shooting method and equipment
CN109005342A (en) * 2018-08-06 2018-12-14 Oppo广东移动通信有限公司 Panorama shooting method, device and imaging device
US11128809B2 (en) * 2019-02-15 2021-09-21 Samsung Electronics Co., Ltd. System and method for compositing high dynamic range images
CN110611750B (en) * 2019-10-31 2022-03-22 北京迈格威科技有限公司 Night scene high dynamic range image generation method and device and electronic equipment
CN110830697A (en) * 2019-11-27 2020-02-21 Oppo广东移动通信有限公司 Control method, electronic device, and storage medium
CN111917950B (en) * 2020-06-30 2022-07-22 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116528058A (en) * 2023-05-26 2023-08-01 中国人民解放军战略支援部队航天工程大学 High dynamic imaging method and system based on compression reconstruction
CN116528058B (en) * 2023-05-26 2023-10-31 中国人民解放军战略支援部队航天工程大学 High dynamic imaging method and system based on compression reconstruction

Also Published As

Publication number Publication date
CN115514876A (en) 2022-12-23
CN115514876B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
US10827140B2 (en) Photographing method for terminal and terminal
CN109863742B (en) Image processing method and terminal device
US10810720B2 (en) Optical imaging method and apparatus
CN107302663B (en) Image brightness adjusting method, terminal and computer readable storage medium
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
CN107566752B (en) Shooting method, terminal and computer storage medium
WO2022267506A1 (en) Image fusion method, electronic device, storage medium, and computer program product
CN107948505B (en) Panoramic shooting method and mobile terminal
CN108419008B (en) Shooting method, terminal and computer readable storage medium
CN109639996B (en) High dynamic scene imaging method, mobile terminal and computer readable storage medium
CN107040723B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN106993136B (en) Mobile terminal and multi-camera-based image noise reduction method and device thereof
CN111064895B (en) Virtual shooting method and electronic equipment
CN110213484B (en) Photographing method, terminal equipment and computer readable storage medium
CN113179374A (en) Image processing method, mobile terminal and storage medium
CN111447371A (en) Automatic exposure control method, terminal and computer readable storage medium
CN111885307A (en) Depth-of-field shooting method and device and computer readable storage medium
CN113888452A (en) Image fusion method, electronic device, storage medium, and computer program product
CN112188082A (en) High dynamic range image shooting method, shooting device, terminal and storage medium
CN113179369A (en) Shot picture display method, mobile terminal and storage medium
CN110177207B (en) Backlight image shooting method, mobile terminal and computer readable storage medium
US11425355B2 (en) Depth image obtaining method, image capture device, and terminal
WO2021218551A1 (en) Photographing method and apparatus, terminal device, and storage medium
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium
CN115134527B (en) Processing method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827013

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE