CN111028192B - Image synthesis method and electronic equipment - Google Patents

Image synthesis method and electronic equipment

Info

Publication number
CN111028192B
CN111028192B (application CN201911309384.4A)
Authority
CN
China
Prior art keywords
image
images
fusion
brightness
fused
Prior art date
Legal status
Active
Application number
CN201911309384.4A
Other languages
Chinese (zh)
Other versions
CN111028192A (en)
Inventor
卢东东
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN201911309384.4A priority Critical patent/CN111028192B/en
Publication of CN111028192A publication Critical patent/CN111028192A/en
Application granted granted Critical
Publication of CN111028192B publication Critical patent/CN111028192B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70

Abstract

The invention provides an image synthesis method and an electronic device, relating to the technical field of image processing. The image synthesis method comprises the following steps: acquiring at least three images, and sorting the at least three images; sequentially selecting a first image and a second image from a first end of the ordering; comparing the contents of the first image and the second image, determining whichever of the first image and the second image satisfies a preset condition as a reference image, and aligning and fusing the reference image with the other image to obtain a first fused image; and obtaining a target fused image from the first fused image and a third image among the at least three images. By performing image synthesis in a manner that aligns and fuses at the same time, the scheme guarantees fusion accuracy, improves the accuracy of ghost removal, and improves the fusion effect.

Description

Image synthesis method and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image synthesis method and an electronic device.
Background
With the continual improvement of mobile phone performance and the popularity of mobile phone photography, users' expectations of camera phones keep rising. However, owing to hardware limitations, it is difficult for a single photo to cover the full brightness range of a scene: in a backlit scene, for example, the subject usually appears dark. HDR (high dynamic range) mode addresses shooting in such high-contrast scenes. Its basic principle is to capture pictures of the same scene at different exposures, so that dark-area detail is visible in the brighter pictures, bright-area content is visible in the darker pictures, and mid-brightness content is preserved in the normally exposed picture. The photos are then fused into one image, and a final tone adjustment yields an HDR photo with richer detail.
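The exposure-bracketing principle described above can be illustrated with a toy per-pixel weighted merge. This is a generic sketch, not the patent's algorithm: it assumes frames are already aligned and normalized to [0, 1], and the Gaussian "well-exposedness" weight is a common heuristic supplied for illustration, not something named in the source.

```python
import numpy as np

def well_exposedness(img):
    # Weight each pixel by how close it is to mid-gray (0.5):
    # well-exposed pixels contribute most, blown-out or crushed pixels least.
    return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))

def naive_hdr_merge(frames):
    # Per-pixel weighted average of aligned frames of different exposures.
    weights = np.stack([well_exposedness(f) for f in frames])
    weights /= weights.sum(axis=0) + 1e-8
    return (weights * np.stack(frames)).sum(axis=0)

# Dark, normal, and bright renderings of the same (static) scene.
dark = np.full((2, 2), 0.1)
normal = np.full((2, 2), 0.5)
bright = np.full((2, 2), 0.9)
merged = naive_hdr_merge([dark, normal, bright])
# By symmetry the well-exposed middle frame dominates, so merged stays near 0.5.
```

Note that this naive merge has no motion handling at all, which is exactly the gap the patent's align-while-fusing scheme addresses.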
In the existing mobile phone shooting flow, photos at different exposures are usually captured frame by frame. When there are moving objects in the scene, or hand shake, the photos cannot be aligned directly; yet the existing fusion de-ghosting algorithm fuses the images directly, so ghosts remain in the result image.
Disclosure of Invention
Embodiments of the present invention provide an image synthesis method and an electronic device, to solve the problem that the existing fusion de-ghosting algorithm cannot reliably suppress fusion ghosts, resulting in low fusion accuracy and a poor fusion effect.
In order to solve the above technical problem, an embodiment of the present invention provides an image synthesis method, comprising:
acquiring at least three images, and sorting the at least three images;
sequentially selecting a first image and a second image from a first end of the ordering;
comparing the contents of the first image and the second image, determining whichever of the first image and the second image satisfies a preset condition as a reference image, and aligning and fusing the reference image with the other image to obtain a first fused image;
and obtaining a target fused image from the first fused image and a third image among the at least three images.
An embodiment of the present invention further provides an electronic device, comprising:
a sorting module, configured to acquire at least three images and sort the at least three images;
a selection module, configured to sequentially select a first image and a second image from a first end of the ordering;
a first fusion module, configured to compare the contents of the first image and the second image, determine whichever of the first image and the second image satisfies a preset condition as a reference image, and align and fuse the reference image with the other image to obtain a first fused image;
and a processing module, configured to obtain a target fused image from the first fused image and a third image among the at least three images.
The beneficial effects of the invention are as follows:
according to the scheme, the image synthesis is performed in a simultaneous alignment and simultaneous fusion mode, so that the fusion accuracy is guaranteed, the ghost image removal accuracy is improved, and the fusion effect is improved.
Drawings
FIG. 1 shows a schematic diagram of a conventional HDR fusion process;
FIG. 2 is a flow chart of an image synthesizing method according to an embodiment of the invention;
FIG. 3 is a detailed flow chart of an image synthesizing method according to an embodiment of the present invention;
FIG. 4 shows one of the block diagrams of the electronic device according to an embodiment of the invention;
FIG. 5 shows a second block diagram of an electronic device according to an embodiment of the invention;
FIG. 6 is a third schematic block diagram of an electronic device according to an embodiment of the invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In the prior art, as shown in fig. 1, the fused de-ghosting algorithm currently in use selects a reference frame of suitable brightness (EV0), which may be called the reference picture, and then aligns a brighter picture (EV+1) with it; if a motion area is detected between the two frames, the information of EV0 is filled onto EV+1. EV+2 is likewise aligned with the already-aligned EV+1, and the process then runs toward the dark frames, aligning EV-1 and EV-2 with EV0.
However, this approach has a major drawback:
If an error occurs when EV+1 is aligned with EV0, abnormal information is filled into EV+1. When EV+2 is then aligned with EV+1, the erroneous area is inevitably detected as motion, and by the logic of the algorithm the previously mis-fused information keeps being filled into the current frame, so the error accumulates and propagates. In other words, as soon as one frame goes wrong during alignment, the result is necessarily wrong, and the error is amplified step by step.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 2, an embodiment of the present invention provides an image synthesis method, including:
step 201, at least three images are obtained, and the at least three images are ordered;
the at least three images may be a plurality of images having different exposure degrees and may be a plurality of images having the same exposure degrees and continuously photographed.
Step 202, sequentially selecting a first image and a second image from the first end of the sorting;
step 203, comparing the content of the first image and the second image, determining one image meeting a preset condition in the first image and the second image as a reference image, and aligning and fusing the reference image to the other image to obtain a first fused image;
and 204, obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
It should be noted that, in the embodiment of the invention, multiple images are fused by aligning and fusing at the same time, which effectively suppresses fusion ghosts and improves the fusion effect.
It should also be noted that the embodiment of the invention can fuse images of different exposures so as to suppress fusion ghosts, and can likewise fuse images of the same exposure in this manner, thereby achieving multi-frame noise reduction.
Further, in different processing scenarios, the specific implementation manner of step 201 is different, which may specifically be:
1. Suppressing fusion ghosts
Specifically, in this case, the implementation manner of step 201 is:
and when the brightness values of the at least three images differ, sorting the at least three images in order of brightness value.
In this case, the first end of the ordering may be either the brightest end or the darkest end; that is, the images may be processed one by one starting from the brightest image in the ordering, or one by one starting from the darkest image.
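Step 201 for this scenario can be sketched as a simple sort by mean luminance (the function name and the mean-luminance ordering key are assumptions for illustration; the patent does not specify how brightness is measured):

```python
import numpy as np

def order_by_brightness(images, brightest_first=True):
    # Sort frames by mean luminance so fusion can start from either end
    # of the ordering (step 201).
    return sorted(images, key=lambda im: float(np.mean(im)), reverse=brightest_first)

frames = [np.full((2, 2), v) for v in (0.5, 0.9, 0.1)]  # e.g. EV0, EV+1, EV-1
ordered = order_by_brightness(frames)
# ordered[0] is now the brightest frame, ordered[-1] the darkest.
```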
2. Multi-frame noise reduction
Specifically, in this case, the implementation manner of step 201 is:
and when the brightness of the at least three images is the same, sorting the at least three images in order of image noise.
In this case, the first end of the ordering may be either the noisiest end or the least noisy end; that is, the images may be processed one by one starting from the noisiest image in the ordering, or one by one starting from the least noisy image.
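The same-brightness case can be sketched analogously, ordering by an estimated noise level (the finite-difference noise proxy below is an assumption; the patent does not say how image noise is measured):

```python
import numpy as np

def noise_estimate(img):
    # Crude noise proxy: standard deviation of vertical pixel differences.
    # For i.i.d. noise of standard deviation sigma this is roughly sigma * sqrt(2).
    return float(np.std(img[1:, :] - img[:-1, :]))

def order_by_noise(images, least_noisy_first=True):
    return sorted(images, key=noise_estimate, reverse=not least_noisy_first)

rng = np.random.default_rng(0)
base = np.zeros((64, 64))
frames = [base + rng.normal(0.0, s, base.shape) for s in (0.05, 0.01, 0.2)]
ordered = order_by_noise(frames)
# ordered now runs from the least noisy frame to the noisiest.
```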
The following mainly describes embodiments of the present invention from the viewpoint of suppressing fusion ghosts.
Specifically, the implementation manner of step 203 is:
determining a motion area according to the first image and the second image;
and if no target object exists in the motion area of the first image but a target object exists in the motion area of the second image, determining the second image as the reference image, and after aligning the second image with the first image, filling the target object in the second image into the first image to obtain the first fused image.
The above-mentioned motion area refers to the region in which the position of the same feature differs between the two images, that is, the region of positional difference between them. In general, during image processing, the brighter image is considered to have weaker noise, and using it as the reference image can improve alignment accuracy. However, if the motion area of the chosen reference image contains no target information, the reference image cannot be used to fill the other image; in that case the reference image must be replaced, which avoids highlight ghosts.
The specific implementation process is as follows: take the brighter of the two images as the reference image; if its motion area contains information (features), fill the information of the reference image into the motion area of the darker image and fuse the images to obtain the first fused image. If no information is detected in the motion area of the reference image, take the darker of the two images as the reference image instead, fill its information into the motion area of the other image, and fuse the images to obtain the first fused image.
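A minimal sketch of this adaptive reference-selection rule follows. The motion test, the overexposure threshold, and the plain averaging of static areas are all stand-in assumptions; the frames are assumed exposure-normalized and already geometrically aligned:

```python
import numpy as np

OVEREXPOSED = 0.98  # assumed threshold: pixels at/above this carry no detail

def motion_mask(a, b, thresh=0.15):
    # Assumed motion detector: pixels whose normalized difference is large.
    return np.abs(a - b) > thresh

def fuse_pair(brighter, darker):
    # Default reference is the brighter frame; if its motion area is
    # overexposed (contains no information), switch to the darker frame.
    mask = motion_mask(brighter, darker)
    ref_is_bright = (not np.all(brighter[mask] >= OVEREXPOSED)) if mask.any() else True
    fused = (brighter + darker) / 2          # placeholder fusion of static areas
    src = brighter if ref_is_bright else darker
    fused[mask] = src[mask]                  # fill motion area from the reference
    return fused, ("bright" if ref_is_bright else "dark")

# Static scene except one blown-out moving pixel in the bright frame.
bright = np.array([[0.5, 1.0], [0.5, 0.5]])
dark = np.array([[0.5, 0.3], [0.5, 0.5]])
fused, ref = fuse_pair(bright, dark)
# The bright frame's motion pixel is overexposed, so the dark frame is used.
```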
It should be noted that, optionally, when aligning, if the darker of the two images has the weakest noise, the embodiment of the present invention uses that darker image as the reference image: if its motion area contains information (features), the information of the reference image is filled into the motion area of the brighter image and the images are fused to obtain the first fused image. If instead the brighter of the two images has the weakest noise, the brighter image is used as the reference image, and if its motion area contains information, the information of the reference image is filled into the motion area of the darker image and the images are fused to obtain the first fused image.
It should be further noted that, after the first align-and-fuse pass produces the first fused image, a second pass produces a second fused image, and so on until the final target fused image is obtained. Specifically, step 204 may be implemented as:
sequentially selecting a third image after the second image from the first end;
comparing the content of the first fused image with the content of the third image, aligning the third image using the reference image that was used to obtain the first fused image, and fusing the aligned third image with the first fused image to obtain a second fused image;
and sequentially executing the alignment and fusion processes until the last image in the sequence is aligned and fused, so as to obtain the target fusion image.
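The progressive pass described in these steps — fuse the first two frames, then fold each subsequent frame into the running result — can be sketched as follows (the `fuse_pair` body here is a plain average standing in for the patent's align/motion/reference logic):

```python
import numpy as np

def fuse_pair(a, b):
    # Stand-in for the align-and-fuse step (steps 202-203); a real
    # implementation would align, detect motion, and pick a reference.
    return (a + b) / 2

def progressive_fusion(ordered_frames):
    # Step 204: starting from the first end of the ordering, fold each
    # successive frame into the running fused image until all are used.
    fused = fuse_pair(ordered_frames[0], ordered_frames[1])
    for nxt in ordered_frames[2:]:
        fused = fuse_pair(fused, nxt)  # second, third, ... fused images
    return fused

frames = [np.full((2, 2), v) for v in (0.9, 0.5, 0.1)]  # brightest to darkest
target = progressive_fusion(frames)
```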
In this case, the reference image used in the previous alignment is generally reused for the current alignment. If, however, the confidence of the previous alignment is low and the reference needs to be changed, the reference image is changed to the third image, and the content of the motion area in the third image is filled into the motion area of the first fused image to obtain the final second fused image; alternatively, the reference image is changed to the first fused image, and the content of the motion area in the first fused image is filled into the motion area of the third image to obtain the final second fused image. Determining whether the reference image should be the first fused image or the third image is analogous to the fusion of the first image and the second image, and is not repeated here.
And fusing the latest fused image and the next image selected in sequence according to the method, and finally obtaining the target fused image.
After the final target fused image has been obtained through repeated processing, note that a common display supports only 8 bits per pixel, whereas fusing multiple 8-bit images of different exposures typically yields data wider than 10 bits. A tone-adjustment algorithm (tone adjustment / DRC) is therefore used to compress the pixels of the final target fused image to 8 bits while losing as little detail and contrast as possible, producing the final composite image so that it can be displayed normally.
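A toy sketch of this final tone-adjustment (DRC) step — the gamma curve and the 12-bit input width are assumptions, since the patent leaves the tone-mapping algorithm unspecified:

```python
import numpy as np

def tone_map_to_8bit(hdr, bits_in=12):
    # Compress a high-bit-depth fused result to 8 bits for display.
    # A simple global gamma curve lifts shadows while keeping highlights.
    x = hdr.astype(np.float64) / (2 ** bits_in - 1)  # normalize to [0, 1]
    y = x ** (1 / 2.2)
    return np.clip(np.round(y * 255), 0, 255).astype(np.uint8)

hdr = np.array([[0, 1024, 4095]], dtype=np.uint16)  # 12-bit fused values
ldr = tone_map_to_8bit(hdr)
# Black stays black, full scale maps to 255, and the mid value is lifted
# above its linear mapping (1024 >> 4 = 64).
```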
Specific implementations of embodiments of the present invention are described below.
Specifically, as shown in fig. 3, EV0 denotes a normally exposed image, EV+X an image brighter than EV0 (the larger X, the brighter), and conversely EV-X an image darker than EV0. The multi-exposure align-and-fuse algorithm provided by the embodiment of the present invention starts from the brightest image, aligns it with the next-brightest image, detects the motion area, compensates for it, and fuses the two to obtain an HDR high-bit map. The resulting HDR high-bit map is then aligned and fused in the same way with the image of the next brightness level, until all images are fused. Finally, the tone-adjustment (tone mapping) step of the HDR algorithm yields the final HDR output.
It should be noted that the double-headed arrows in fig. 3 indicate that the alignment process does not use a fixed reference image (i.e., reference frame): for the alignment of EV+2 and EV+1, the reference frame may be EV+2 on the left, or EV+1. In a specific implementation, the brighter image on the left (EV+2) is the default reference frame. If the detected motion area of the reference frame is overexposed (contains no information), the motion area of the EV+1 frame cannot be filled with the reference frame's information; the reference frame is then changed to EV+1, and the motion area of EV+2 is filled with the information of EV+1. Since EV+1 is darker than EV+2, the probability that its motion area is overexposed is far smaller than when EV+2 serves as the reference frame, which greatly alleviates the highlight-ghost problem.
After EV+2 and EV+1 are aligned, they are fused to obtain an HDR high-bit map. The resulting intermediate HDR high-bit map is then aligned with EV0; the reference frame of the previous step (EV+2 or EV+1) can be reused for this alignment. If highlight motion is present, or the confidence of the previous step is judged too low so that the reference frame must be adjusted, the motion area of the intermediate high-bit map is filled with the information of EV0 and the reference frame is replaced with EV0; otherwise the previous reference frame (EV+2 or EV+1) is kept and the motion area of EV0 is filled with the information of the intermediate high-bit map. The next HDR high-bit map thus obtained is then processed with EV-1, and so on until all images are processed. The final HDR high-bit map undergoes tone adjustment to compress its dynamic range, giving the final HDR result map.
It should be noted that the above embodiment aligns while fusing: each time a darker frame is aligned, the most recently obtained HDR high-bit map is fused with that frame. Because the reference frame is not fixed, errors from the previous stage are discarded when the reference frame is replaced during fusion. By evaluating the confidence of the previous alignment and the presence of highlight motion, the algorithm decides whether the current fusion should change the reference frame to a darker frame, so that errors are not passed on and amplified step by step, and processing accuracy is ensured.
It should be noted that, in the embodiment of the present invention, no fixed reference frame is used during alignment, and alignment and fusion are performed simultaneously, which ensures fusion accuracy, improves the accuracy of ghost removal, improves the fusion effect, and thereby improves the user experience.
As shown in fig. 4 to 6, an embodiment of the present invention further provides an electronic device 400, including:
a sorting module 401, configured to acquire at least three images, and sort the at least three images;
a selection module 402 for sequentially selecting a first image and a second image from a first end of the ordering;
a first fusion module 403, configured to compare contents of the first image and the second image, determine one image that meets a preset condition in the first image and the second image as a reference image, and align and fuse the reference image to another image, so as to obtain a first fused image;
and a processing module 404, configured to obtain a target fusion image according to the first fusion image and a third image in the at least three images.
Specifically, the sorting module 401 implements one of the following:
when the brightness values of the at least three images differ, sorting the at least three images in order of brightness value;
and when the brightness of the at least three images is the same, sorting the at least three images in order of image noise.
Optionally, the first fusion module 403 includes:
a determining unit 4031 for determining a motion region from the first image and the second image;
and the first fusion unit 4032 is configured to determine the second image as a reference image if there is no target object in the motion area of the first image and there is a target object in the motion area of the second image, align the second image with the first image, and fill the target object in the second image into the first image to obtain a first fusion image.
Optionally, the first fusing module 403 is configured to:
and if the noise of the image with the minimum brightness in the first image and the second image is the weakest, taking the image with the minimum brightness as a reference image, otherwise taking the image with the maximum brightness as the reference image.
Optionally, the processing module 404 includes:
a selection unit 4041 for sequentially selecting a third image after the second image from the first end;
a second fusion unit 4042, configured to compare the content of the first fused image and the content of the third image, align the third image by using the reference image used for obtaining the first fused image as the reference image used for aligning the third image, and fuse the third image obtained after the alignment with the first fused image to obtain a second fused image;
and a processing unit 4043, configured to sequentially perform the alignment and fusion processes until the last image in the sequence is aligned and fused, so as to obtain the target fusion image.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of fig. 2, and in order to avoid repetition, a description is omitted here. The electronic equipment provided by the embodiment of the invention synthesizes the images in a mode of aligning and fusing at the same time, so that the fusion accuracy is ensured, the ghost image removing accuracy is improved, the fusion effect is improved, and the use experience of a user is improved.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present invention.
The electronic device 70 includes, but is not limited to: radio frequency unit 710, network module 720, audio output unit 730, input unit 740, sensor 750, display unit 760, user input unit 770, interface unit 780, memory 790, processor 711, and power supply 712. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 7 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In an embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
Wherein the processor 711 is configured to acquire at least three images, and sequence the at least three images; sequentially selecting a first image and a second image from a first end of the ordering; comparing the contents of the first image and the second image, determining one image meeting preset conditions in the first image and the second image as a reference image, and aligning and fusing the reference image to the other image to obtain a first fused image; and obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
The electronic equipment provided by the embodiment of the invention synthesizes the images in a mode of aligning and fusing at the same time, so that the fusion accuracy is ensured, the ghost image removing accuracy is improved, the fusion effect is improved, and the use experience of a user is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 710 may be configured to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the received downlink data with the processor 711; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 710 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 720, such as helping the user to send and receive e-mail, browse web pages, access streaming media, and the like.
The audio output unit 730 may convert audio data received by the radio frequency unit 710 or the network module 720 or stored in the memory 790 into an audio signal and output as sound. Also, the audio output unit 730 may also provide audio output (e.g., a call signal receiving sound, a message receiving sound, etc.) related to a particular function performed by the electronic device 70. The audio output unit 730 includes a speaker, a buzzer, a receiver, and the like.
The input unit 740 is used to receive audio or video signals. The input unit 740 may include a graphics processor (Graphics Processing Unit, GPU) 741 and a microphone 742. The graphics processor 741 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 760. The image frames processed by the graphics processor 741 may be stored in the memory 790 (or other storage medium) or transmitted via the radio frequency unit 710 or the network module 720. The microphone 742 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 710.
The electronic device 70 also includes at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 761 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 761 and/or the backlight when the electronic device 70 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 750 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 760 serves to display information input by a user or information provided to the user. The display unit 760 may include a display panel 761, and the display panel 761 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 770 may be used to receive input numerical or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 770 includes a touch panel 771 and other input devices 772. The touch panel 771, also referred to as a touch screen, may collect touch operations on or near it by a user (e.g., operations by the user on or near the touch panel 771 using any suitable object or accessory such as a finger or stylus). The touch panel 771 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 711, and receives and executes commands sent by the processor 711. In addition, the touch panel 771 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 771, the user input unit 770 may also include other input devices 772. In particular, the other input devices 772 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 771 may be overlaid on the display panel 761, and when the touch panel 771 detects a touch operation thereon or thereabout, the touch panel is transferred to the processor 711 to determine a type of touch event, and then the processor 711 provides a corresponding visual output on the display panel 761 according to the type of touch event. Although in fig. 7, the touch panel 771 and the display panel 761 are implemented as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 771 and the display panel 761 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 780 is an interface for connecting an external device to the electronic apparatus 70. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 780 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 70 or may be used to transmit data between the electronic apparatus 70 and an external device.
Memory 790 may be used to store software programs as well as various data. The memory 790 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the electronic device (such as audio data or a phonebook). In addition, memory 790 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 711 is the control center of the electronic device. It connects the various parts of the entire device using various interfaces and lines, and performs the device's functions and processes data by running or executing the software programs and/or modules stored in the memory 790 and calling the data stored in the memory 790, thereby monitoring the electronic device as a whole. The processor 711 may include one or more processing units; preferably, the processor 711 integrates an application processor, which mainly handles the operating system, user interface, and applications, with a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor need not be integrated into the processor 711.
The electronic device 70 may also include a power supply 712 (e.g., a battery) for powering the various components. Preferably, the power supply 712 is logically connected to the processor 711 through a power management system, which performs functions such as charge, discharge, and power-consumption management.
In addition, the electronic device 70 includes other functional modules that are not shown and are not described in detail here.
Preferably, an embodiment of the present invention further provides an electronic device including a processor 711, a memory 790, and a computer program stored in the memory 790 and executable on the processor 711. When executed by the processor 711, the computer program implements the respective processes of the image synthesis method embodiment and achieves the same technical effects; to avoid repetition, a detailed description is omitted here.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the image synthesis method embodiment and achieves the same technical effects; to avoid repetition, the description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) that includes instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the method according to the embodiments of the present invention.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and changes can be made without departing from the principles of the present invention, and such modifications and changes are intended to be within the scope of the present invention.

Claims (4)

1. An image synthesis method, characterized by comprising:
acquiring at least three images and ordering the at least three images;
sequentially selecting a first image, a second image, and a third image from a first end of the ordering;
if no target object exists in the motion area of the first image and a target object exists in the motion area of the second image, taking the second image as a reference image, aligning the second image with the first image, and then filling the target object in the second image into the first image to obtain a first fused image;
taking the second image as the reference image, aligning the second image with the third image, and fusing the first fused image with the aligned third image to obtain a second fused image;
sequentially performing the alignment and fusion processes until the last image in the ordering has been aligned and fused, to obtain a target fused image;
and judging whether to adjust the reference image according to the confidence of the most recent alignment.
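The iterative align-and-fuse flow of claim 1 can be sketched as follows. This is an illustrative sketch only: images are modeled as flat lists of pixel values, and `align`, `fuse`, and `has_target` are hypothetical stand-in helpers, not the patent's actual alignment, fusion, or motion-detection implementations.

```python
def align(reference, image):
    # Stand-in alignment: return the image unchanged, with full confidence.
    # A real implementation would register `image` against `reference`.
    return image, 1.0

def fuse(a, b):
    # Stand-in fusion: per-pixel average of two equally sized images.
    return [(x + y) / 2 for x, y in zip(a, b)]

def has_target(image, motion_area):
    # Stand-in motion detection: any nonzero pixel inside the motion area.
    return any(image[i] != 0 for i in motion_area)

def synthesize(images, motion_area, conf_threshold=0.5):
    # Step 1: images are assumed already ordered; take the first two.
    first, second = images[0], images[1]
    reference = second
    if not has_target(first, motion_area) and has_target(second, motion_area):
        # Fill the target object from the aligned second image into the first,
        # producing the first fused image.
        aligned, _ = align(reference, second)
        fused = first[:]
        for i in motion_area:
            fused[i] = aligned[i]
    else:
        fused = first
    # Step 2: align and fuse each remaining image in order.
    for image in images[2:]:
        aligned, confidence = align(reference, image)
        fused = fuse(fused, aligned)
        if confidence < conf_threshold:
            # Low alignment confidence: adjust the reference image.
            reference = image
    return fused
```

With the stand-in helpers, fusing three four-pixel images where only the later frames contain a moving object copies the object into the base frame and then averages the remaining frames in.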
2. The image synthesis method according to claim 1, wherein the ordering of the at least three images comprises one of:
when the brightnesses of the at least three images differ, ordering the at least three images by brightness value;
and when the brightnesses of the at least three images are the same, ordering the at least three images by image noise.
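The two ordering criteria of claim 2 can be sketched as below. The brightness and noise metrics here (`mean_brightness`, `noise_estimate`) are hypothetical stand-ins chosen for illustration; the patent does not specify which metrics are used.

```python
def mean_brightness(image):
    # Stand-in brightness metric: mean pixel value of a flat image.
    return sum(image) / len(image)

def noise_estimate(image):
    # Stand-in noise metric: mean absolute difference between neighbors.
    diffs = [abs(a - b) for a, b in zip(image, image[1:])]
    return sum(diffs) / len(diffs)

def order_images(images):
    brightness = [mean_brightness(im) for im in images]
    if len(set(brightness)) > 1:
        # Brightnesses differ: order by brightness value.
        return sorted(images, key=mean_brightness)
    # Same brightness: order by estimated image noise instead.
    return sorted(images, key=noise_estimate)
```

When brightnesses differ, the darkest image comes first; when they are equal, the least noisy image comes first.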
3. An electronic device, comprising:
an ordering module, configured to acquire at least three images and order the at least three images;
a selection module, configured to sequentially select a first image, a second image, and a third image from a first end of the ordering;
a first fusion module, configured to, if no target object exists in the motion area of the first image and a target object exists in the motion area of the second image, take the second image as a reference image, align the second image with the first image, and fill the target object in the second image into the first image to obtain a first fused image;
a processing module, configured to take the second image as the reference image, align the second image with the third image, and fuse the first fused image with the aligned third image to obtain a second fused image; to sequentially perform the alignment and fusion processes until the last image in the ordering has been aligned and fused, to obtain a target fused image; and to judge whether to adjust the reference image according to the confidence of the most recent alignment.
4. The electronic device of claim 3, wherein the ordering module implements one of:
when the brightnesses of the at least three images differ, ordering the at least three images by brightness value;
and when the brightnesses of the at least three images are the same, ordering the at least three images by image noise.
CN201911309384.4A 2019-12-18 2019-12-18 Image synthesis method and electronic equipment Active CN111028192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911309384.4A CN111028192B (en) 2019-12-18 2019-12-18 Image synthesis method and electronic equipment


Publications (2)

Publication Number Publication Date
CN111028192A CN111028192A (en) 2020-04-17
CN111028192B true CN111028192B (en) 2023-08-08

Family

ID=70210354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911309384.4A Active CN111028192B (en) 2019-12-18 2019-12-18 Image synthesis method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111028192B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1657679A1 (en) * 2004-11-10 2006-05-17 Agfa-Gevaert Method of superimposing images
EP2075756A1 (en) * 2007-12-29 2009-07-01 Morpho Inc. Block-based image blending for camera shake compensation
CN102663768A (en) * 2012-05-10 2012-09-12 合肥工业大学 Method for adaptively determining optimal polarization orientation in polarization vision system
CN103530861A (en) * 2013-10-22 2014-01-22 天津普达软件技术有限公司 Method for splicing and fusing core images
CN103973958A (en) * 2013-01-30 2014-08-06 阿里巴巴集团控股有限公司 Image processing method and image processing equipment
CN105431883A (en) * 2013-07-08 2016-03-23 株式会社电装 Image generating device, image generating method, and non-transitory computer-readable storage medium
CN106204513A (en) * 2016-08-15 2016-12-07 厦门美图之家科技有限公司 Image processing method, device and system
CN106612397A (en) * 2016-11-25 2017-05-03 努比亚技术有限公司 Image processing method and terminal
CN109863742A (en) * 2017-01-25 2019-06-07 华为技术有限公司 Image processing method and terminal device
CN110062160A (en) * 2019-04-09 2019-07-26 Oppo广东移动通信有限公司 Image processing method and device
CN110072061A (en) * 2019-05-31 2019-07-30 努比亚技术有限公司 Interactive shooting method, mobile terminal and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Scene-depth-independent ghost removal algorithm for virtual aperture images; Xiao Zhaolin; Zhou Guoqing; Journal of Northwestern Polytechnical University; Vol. 35, No. 1; pp. 154-159 *

Also Published As

Publication number Publication date
CN111028192A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN108307109B (en) High dynamic range image preview method and terminal equipment
CN109218626B (en) Photographing method and terminal
CN110213484B (en) Photographing method, terminal equipment and computer readable storage medium
CN107623818B (en) Image exposure method and mobile terminal
CN108040209B (en) Shooting method and mobile terminal
CN108449541B (en) Panoramic image shooting method and mobile terminal
CN108924414B (en) Shooting method and terminal equipment
CN110602401A (en) Photographing method and terminal
CN108234894B (en) Exposure adjusting method and terminal equipment
CN107730460B (en) Image processing method and mobile terminal
CN109462745B (en) White balance processing method and mobile terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN108280817B (en) Image processing method and mobile terminal
CN109474784B (en) Preview image processing method and terminal equipment
CN111083386B (en) Image processing method and electronic device
CN112188082A (en) High dynamic range image shooting method, shooting device, terminal and storage medium
CN111447371A (en) Automatic exposure control method, terminal and computer readable storage medium
CN108616687B (en) Photographing method and device and mobile terminal
CN108307123B (en) Exposure adjusting method and mobile terminal
CN107896304B (en) Image shooting method and device and computer readable storage medium
CN110363729B (en) Image processing method, terminal equipment and computer readable storage medium
CN109167917B (en) Image processing method and terminal equipment
CN108848321B (en) Exposure optimization method, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant