CN109040524B - Artifact eliminating method and device, storage medium and terminal - Google Patents

Artifact eliminating method and device, storage medium and terminal

Info

Publication number: CN109040524B
Authority: CN (China)
Prior art keywords: exposure, short, long, image, images
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201810936334.8A
Other languages: Chinese (zh)
Other versions: CN109040524A (en)
Inventor: 孙剑波
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810936334.8A
Publication of CN109040524A
Application granted
Publication of CN109040524B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Abstract

The embodiment of the application discloses an artifact eliminating method and device, a storage medium, and a terminal. The method comprises the following steps: first, when it is detected that a long-exposure image is to be acquired, acquiring the long-exposure image through a first camera over the long exposure time while continuously acquiring at least two first short-exposure images through a second camera; then, adjusting the at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images; and finally, repairing the artifact region in the long-exposure image according to the at least two second short-exposure images, which improves the resource utilization of the device.

Description

Artifact eliminating method and device, storage medium and terminal
Technical Field
The embodiment of the application relates to the technical field of mobile terminals, in particular to an artifact eliminating method, an artifact eliminating device, a storage medium and a terminal.
Background
With the continuous development of mobile terminals, almost every mobile terminal now provides a camera function through which photographs can be taken. Mobile terminals typically automate the photographing process and can automatically determine the exposure time according to the shooting environment.
However, in practice, as the exposure time increases, movement of the subject during the exposure may introduce artifacts into the resulting image, making it unclear. The user then has to take the picture again to obtain a satisfactory result, which leads to low utilization of system resources.
Disclosure of Invention
The embodiment of the application aims to provide an artifact eliminating method, an artifact eliminating device, a storage medium and a terminal, which can improve the resource utilization rate of a mobile terminal.
In a first aspect, an embodiment of the present application provides an artifact removing method, including:
when it is detected that a long-exposure image is to be acquired, acquiring the long-exposure image through a first camera over the long exposure time, and continuously acquiring at least two first short-exposure images through a second camera, wherein the exposure time of the long-exposure image is longer than that of the first short-exposure images;
adjusting the at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images;
and repairing the artifact areas in the long-exposure images according to the at least two second short-exposure images.
In a second aspect, an embodiment of the present application provides an artifact removing apparatus, including:
the acquisition module is configured to, when it is detected that a long-exposure image is to be acquired, acquire the long-exposure image through the first camera over the long exposure time and continuously acquire at least two first short-exposure images through the second camera, wherein the exposure time of the long-exposure image is longer than that of the first short-exposure images;
the adjusting module is used for adjusting the at least two first short-exposure images according to the brightness of the long-exposure image acquired by the acquiring module to obtain at least two second short-exposure images;
and the repairing module is used for repairing the artifact areas in the long-exposure images according to the at least two second short-exposure images obtained by the adjusting module.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the artifact removing method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the artifact removing method according to the first aspect.
According to the artifact eliminating scheme provided by the embodiment of the application, first, when it is detected that a long-exposure image is to be acquired, the long-exposure image is acquired through the first camera over the long exposure time while at least two first short-exposure images are continuously acquired through the second camera; then, the at least two first short-exposure images are adjusted according to the brightness of the long-exposure image to obtain at least two second short-exposure images; and finally, the artifact region in the long-exposure image is repaired according to the at least two second short-exposure images, which improves the resource utilization of the device.
Drawings
Fig. 1 is a schematic flowchart of an artifact removing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another artifact removing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another artifact removing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another artifact removing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another artifact removing method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another artifact removing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an artifact removing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solution of the application is further explained below through specific embodiments in combination with the accompanying drawings. It should be understood that the specific embodiments described herein merely illustrate the application and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the present application rather than all structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
With the continuous development of mobile terminals, almost every mobile terminal now provides a camera function through which photographs can be taken. Mobile terminals typically automate the photographing process and can automatically determine the exposure time according to the shooting environment. However, in practice, as the exposure time increases, movement of the subject during the exposure may introduce artifacts into the resulting image, making it unclear; the user then has to take the picture again to obtain a satisfactory result, which leads to low utilization of system resources. Moreover, because any displacement of the subject during the long exposure affects the final image, the subject must not move and the photographer must avoid excessive shake or movement while shooting, so handheld long-exposure photographs often turn out poorly.
The embodiment of the application provides an artifact eliminating method for a mobile terminal with two cameras: one camera performs the long exposure while the other continuously acquires at least two first short-exposure images during the long exposure; the brightness of the first short-exposure images is adjusted to obtain second short-exposure images, and the artifact region in the long-exposure image is repaired according to the at least two consecutive second short-exposure images. This improves the reliability of the long-exposure image, avoids repeated shooting, and raises the utilization of system resources. At the same time, neither the photographer nor the photographed person needs to remain strictly still, which improves the usability of long-exposure photography. The specific scheme is as follows:
fig. 1 is a schematic flowchart of an artifact removing method according to an embodiment of the present application, where the method is used in a case where a mobile terminal with two cameras performs long-exposure shooting, and the method may be executed by the mobile terminal, where the mobile terminal may be a smart phone, a tablet computer, a wearable device, a notebook computer, or the like, and the method specifically includes the following steps:
and step 110, when the acquisition of the long exposure image is detected, acquiring a long exposure image through the first camera within the long exposure time, and continuously acquiring at least two first short exposure images through the second camera.
Here, the exposure time of the long-exposure image is greater than the exposure time of the first short-exposure image. When a photographing instruction is triggered, the mobile terminal determines the exposure time according to the current ambient brightness. Exposures can be divided by duration into long exposures and short exposures: when the ambient brightness is high, the lens module needs only a short time to acquire the image and a short exposure is used; when the ambient brightness is low, the lens module needs a longer time and a long exposure is used. In one embodiment, the long exposure time may be greater than 0.5 seconds; exemplary long exposure times are 1 second, 15 seconds, 30 seconds, 10 minutes, 1 hour, or two hours or more. Correspondingly, the short exposure time is less than the long exposure time, for example 0.2 or 0.3 seconds. Alternatively, the long exposure time may be an integer multiple of the short exposure time. The embodiment of the application is mainly directed to the case in which the mobile terminal determines to acquire a long-exposure image. Whether a long-exposure image is to be acquired can be determined by checking the exposure time: optionally, the exposure time is obtained and compared with a preset exposure time, and if it exceeds the preset exposure time (for example, 1 second), it is determined that a long-exposure image is to be acquired. Optionally, it is detected whether the user has triggered a long-exposure instruction, and if so, it is determined that a long-exposure image is to be acquired.
When a long-exposure image is acquired, the start of the long exposure is the exposure start time and its end is the exposure stop time. On a mobile terminal with two cameras, one camera can perform the long exposure while the other synchronously acquires at least two first short-exposure images between the exposure start time and the exposure stop time. Optionally, the at least two first short-exposure images are acquired continuously at a preset time interval. The preset time interval may be a fixed value, such as 50 milliseconds, or it may be determined from the long exposure time and a preset number of first short-exposure images; illustratively, the long exposure time is divided by the preset number of first short-exposure images to obtain the preset time interval, which in that case is also the exposure time of each first short-exposure image.
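For illustration only, the following Python sketch shows one way the capture timing described above could be derived; the function name, parameters, and the fixed-interval fallback are assumptions made for this sketch rather than details disclosed in the embodiment.

```python
# Minimal sketch (assumed helper, not the embodiment's implementation): derive the
# schedule on which the second camera captures first short-exposure images while
# the first camera performs a single long exposure.

def short_exposure_schedule(long_exposure_s, preset_frame_count=None, fixed_interval_s=0.05):
    """Return (interval_s, capture_offsets_s) for the second camera."""
    if preset_frame_count:
        # Interval = long exposure time / preset number of first short-exposure images;
        # in this case the interval is also each short-exposure image's exposure time.
        interval = long_exposure_s / preset_frame_count
        count = preset_frame_count
    else:
        # Otherwise use a fixed interval, e.g. 50 ms.
        interval = fixed_interval_s
        count = int(long_exposure_s // fixed_interval_s)
    offsets = [i * interval for i in range(count)]  # offsets from the exposure start time
    return interval, offsets

interval, offsets = short_exposure_schedule(1.0, preset_frame_count=8)  # 0.125 s, 8 frames
```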
And step 120, adjusting at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images.
Because the exposure time of a short-exposure image is short, its luminance may be insufficient. Therefore, after the long-exposure image and the at least two first short-exposure images are acquired, the luminance of each first short-exposure image is adjusted to match that of the long-exposure image.
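A minimal sketch of this brightness adjustment is given below, assuming a single global gain derived from mean luminance; the helper name and the gain model are illustrative assumptions, since the embodiment does not specify the adjustment formula.

```python
import numpy as np

def match_brightness(long_img, first_shorts):
    """Adjust first short-exposure images toward the long-exposure image's brightness
    (assumed global-gain model), yielding the second short-exposure images."""
    target = long_img.astype(np.float32).mean()          # brightness of the long-exposure image
    second_shorts = []
    for img in first_shorts:
        f = img.astype(np.float32)
        gain = target / max(f.mean(), 1e-6)              # avoid division by zero
        second_shorts.append(np.clip(f * gain, 0, 255).astype(np.uint8))
    return second_shorts

# Usage with dummy 8-bit frames: each adjusted frame now has a mean near 180.
long_img = np.full((4, 4), 180, np.uint8)
shorts = [np.full((4, 4), 60, np.uint8), np.full((4, 4), 70, np.uint8)]
second_shorts = match_brightness(long_img, shorts)
```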
Further, the hardware distance between the first camera and the second camera is obtained.
When the hardware distance is greater than a preset correction distance, the long-exposure image or the at least two first short-exposure images are corrected.
The two cameras may be mounted close together or relatively far apart on the mobile terminal; when they are far apart, the position of an object differs between the images acquired by the two cameras. In that case the images from the dual cameras can be corrected according to the hardware distance and the object distance. Optionally, the images of the second camera are corrected so that the position of an object shot by the second camera coincides in the image with its position as shot by the first camera.
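The following sketch illustrates one possible correction under a simple pinhole-camera assumption, shifting the second camera's image by the disparity implied by the hardware distance (baseline) and the object distance; the threshold value and the uniform horizontal shift are assumptions, not the embodiment's prescribed correction.

```python
import numpy as np

def correct_parallax(img, baseline_m, focal_px, object_distance_m, correction_threshold_m=0.01):
    """Roughly align the second camera's image with the first camera's view
    (pinhole assumption: disparity = focal length in pixels * baseline / depth)."""
    if baseline_m <= correction_threshold_m:
        return img                                   # cameras close enough: no correction needed
    disparity_px = int(round(focal_px * baseline_m / object_distance_m))
    return np.roll(img, -disparity_px, axis=1)       # crude uniform horizontal shift

frame = np.zeros((100, 200), np.uint8)
aligned = correct_parallax(frame, baseline_m=0.02, focal_px=1000.0, object_distance_m=2.0)
```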
And step 130, repairing the artifact area in the long-exposure image according to the at least two second short-exposure images.
The posture information of the photographed subject in the at least two second short-exposure images is acquired, the posture of the subject in the artifact region is determined from the posture information that occurs most often, and a clear image used to fill in the subject in the artifact region is determined according to that posture information. The part of the artifact region outside the clear image is then repaired according to the second short-exposure images and the long-exposure image.
According to the artifact removing method provided by the embodiment of the application, first, when it is detected that a long-exposure image is to be acquired, the long-exposure image is acquired through the first camera over the long exposure time while at least two first short-exposure images are continuously acquired through the second camera. Then, the at least two first short-exposure images are adjusted according to the brightness of the long-exposure image to obtain at least two second short-exposure images. Finally, the artifact region in the long-exposure image is repaired according to the at least two second short-exposure images. Compared with performing long-exposure shooting with a single camera, the embodiment of the application uses the dual cameras of the mobile terminal: one camera performs the long exposure while the other continuously acquires at least two first short-exposure images during the long exposure; the brightness of the first short-exposure images is adjusted to obtain second short-exposure images, and the artifact region in the long-exposure image is repaired according to the at least two consecutive second short-exposure images. This improves the reliability of the long-exposure image, avoids repeated shooting, and raises the utilization of system resources. At the same time, neither the photographer nor the photographed person needs to remain strictly still, which improves the usability of long-exposure photography. The multi-frame motion of the subject during the long exposure can be recovered from the at least two consecutive second short-exposure images, so the artifact region can be repaired more accurately on the basis of those consecutive images.
In addition, because the two cameras acquire the long-exposure image and the short-exposure images simultaneously, the short-exposure images used to repair the artifact cover the same moments as the long exposure, and the imaging of the two cameras does not interfere with each other. The mobile terminal with dual cameras can therefore obtain the long-exposure image and the short-exposure images (the first and second short-exposure images) for the same time period, and the artifact can be repaired more efficiently.
Fig. 2 is a schematic flowchart of an artifact removing method according to an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
and step 210, when the acquisition of the long-exposure image is detected, determining the overlapping view range of the first camera and the second camera according to the angle information within the long exposure time.
The angle information represents the shooting ranges of the first camera and the second camera. The first camera and the second camera may be, for example, a wide-angle lens and a telephoto lens, which generally have different shooting ranges. In the embodiment of the present application, the at least two short-exposure images acquired by the second camera are used to repair the long-exposure image acquired by the first camera, so the shooting ranges of the two cameras need to be made consistent. The first camera and the second camera are adjusted according to the focal length selected by the user so that their overlapping view range reaches its maximum.
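As an illustration of determining an overlapping view range, the sketch below crops a wide-angle frame to a narrower (telephoto) field of view so that both cameras cover the same scene; the field-of-view values and the central-crop approach are assumptions for this sketch, since the embodiment only states that the cameras are adjusted to maximise the overlap.

```python
import math
import numpy as np

def crop_to_overlap(wide_img, wide_fov_deg, tele_fov_deg):
    """Central crop of the wide-angle image covering the telephoto field of view
    (simple pinhole approximation)."""
    h, w = wide_img.shape[:2]
    ratio = math.tan(math.radians(tele_fov_deg / 2)) / math.tan(math.radians(wide_fov_deg / 2))
    cw, ch = int(w * ratio), int(h * ratio)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    return wide_img[y0:y0 + ch, x0:x0 + cw]

wide = np.zeros((3000, 4000, 3), np.uint8)            # wide-angle frame
overlap = crop_to_overlap(wide, wide_fov_deg=80.0, tele_fov_deg=40.0)
```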
And step 220, in the overlapped view range, acquiring a long exposure image through the first camera, and continuously acquiring at least two first short exposure images through the second camera.
And step 230, adjusting at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images.
And step 240, repairing the artifact area in the long-exposure image according to the at least two second short-exposure images.
The artifact removing method provided by the embodiment of the application can adjust the viewing ranges of the first camera and the second camera so that they overlap, repair the artifact within the overlapping viewing range, and thereby improve the repair efficiency of the long exposure.
Fig. 3 is a schematic flowchart of an artifact removing method according to an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
and 310, when the acquisition of the long exposure image is detected, acquiring a long exposure image through the first camera within the long exposure time, and continuously acquiring at least two first short exposure images through the second camera.
And step 320, adjusting at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images.
And step 330, acquiring the weights of at least two second short-exposure images.
The weight of each second short-exposure image may be determined according to the shooting order; for example, images captured early in the long exposure are weighted more heavily than those captured late in the long exposure.
And step 340, determining target weight and sharp image according to the weights of at least two second short-exposure images.
The target weight is the highest value among the weights of the at least two second short-exposure images, and the clear image is the image area, in the second short-exposure image corresponding to the target weight, that corresponds to the artifact region. After the weights of the second short-exposure images are acquired, the weight of the one or more second short-exposure images with the highest weight is taken as the target weight, and the clear image is determined from the second short-exposure image corresponding to the target weight.
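A minimal sketch of this weighting and selection step follows; weighting frames linearly by shooting order and taking a simple argmax are assumptions used only for illustration.

```python
import numpy as np

def order_weights(num_frames):
    """Assumed weighting: frames captured earlier in the long exposure weigh more."""
    return np.linspace(1.0, 0.5, num_frames)

def pick_target(second_shorts):
    weights = order_weights(len(second_shorts))
    target_idx = int(np.argmax(weights))                    # target weight = the highest weight
    return weights, target_idx, second_shorts[target_idx]   # clear image is taken from this frame

frames = [np.zeros((4, 4), np.uint8) for _ in range(5)]
weights, idx, clear_source = pick_target(frames)            # idx == 0: the earliest frame wins here
```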
And step 350, determining target position information according to the shot object in the second short-exposure image corresponding to the target weight.
The position of the photographed object in the second short-exposure image corresponding to the target weight is determined as the target position information.
And step 360, covering the clear image to the target position information.
The artifact region in the long-exposure image is determined from the displacement of the photographed object across the at least two consecutive second short-exposure images. A base image of the photographed object is then determined from the content of the second short-exposure image corresponding to the target weight; this base image is used to replace the artifact region in the long-exposure image. The base image can then be refined with a sharpening algorithm and an interpolation algorithm to obtain the clear image, which is filled into the artifact region.
And step 370, repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except for the clear image in the artifact area.
The parts of the artifact region not covered by the clear image can be repaired from the same or adjacent position areas in the other second short-exposure images and in the long-exposure image.
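The following sketch illustrates steps 360 and 370 together under simplifying assumptions: the clear image is pasted at the target position inside the artifact region, and the remaining artifact pixels are filled from a pixel-wise median of the other second short-exposure images. The boolean-mask representation of the artifact region and the median fill are illustrative assumptions, not the embodiment's exact repair procedure.

```python
import numpy as np

def repair_artifact(long_img, clear_img, target_xy, artifact_mask, other_seconds):
    """Cover the clear image at the target position, then repair the rest of the
    artifact region from the other second short-exposure images (assumed median fill)."""
    x, y = target_xy
    h, w = clear_img.shape[:2]
    out = long_img.copy()
    out[y:y + h, x:x + w] = clear_img                   # step 360: cover the clear image

    covered = np.zeros_like(artifact_mask, dtype=bool)
    covered[y:y + h, x:x + w] = True
    remaining = artifact_mask & ~covered                # artifact area outside the clear image
    filler = np.median(np.stack(other_seconds), axis=0).astype(long_img.dtype)
    out[remaining] = filler[remaining]                  # step 370: repair the other areas
    return out

long_img = np.full((8, 8), 200, np.uint8)
clear = np.full((3, 3), 120, np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[1:6, 1:6] = True                                   # assumed artifact region
others = [np.full((8, 8), 190, np.uint8), np.full((8, 8), 210, np.uint8)]
repaired = repair_artifact(long_img, clear, (2, 2), mask, others)
```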
The artifact removing method provided by the embodiment of the application can determine the clear image based on the second short-exposure image corresponding to the target weight after the target weight is determined, repair the artifact area based on the clear image, and improve the repair efficiency of the long-exposure image.
Fig. 4 is a schematic flowchart of an artifact removing method according to an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
and step 410, when the acquisition of the long exposure image is detected, acquiring a long exposure image through the first camera within the long exposure time, and continuously acquiring at least two first short exposure images through the second camera.
And step 420, adjusting at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images.
And step 430, acquiring the posture information of the subject in at least two second short-exposure images.
The subject in each second short-exposure image can be identified by image analysis, and the posture information is determined from the image area of the subject. The posture information can be represented by the pixel region occupied by the photographed subject.
And step 440, determining the weight of each second short-exposure image according to the posture information.
Each piece of posture information is scored, and a higher weight is assigned to the second short-exposure images with higher posture scores.
And step 450, determining target weight and clear image according to the weights of the at least two second short-exposure images.
The weight of the second short-exposure image with the best posture score is determined as the target weight. The subject region in the image corresponding to the target weight is then repaired using the other second short-exposure images to obtain the clear image. The scoring mechanism may be obtained by machine learning, for example by training on a set of preferred photographing poses so that different postures can be scored.
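As a toy illustration of this scoring-based weighting, the sketch below describes each frame's posture by the pixel area its subject occupies and scores it with a placeholder function standing in for the learned scoring model mentioned above; both the descriptor and the score are assumptions made for the sketch.

```python
import numpy as np

def pose_descriptor(subject_mask):
    return float(subject_mask.sum())          # assumed posture feature: subject pixel area

def pose_score(descriptor):
    return descriptor                         # placeholder for a learned scoring model

def best_pose_frame(subject_masks):
    """Index of the frame whose posture score (used as its weight) is highest."""
    weights = [pose_score(pose_descriptor(m)) for m in subject_masks]
    return int(np.argmax(weights))

masks = [np.zeros((6, 6), dtype=bool) for _ in range(3)]
masks[1][1:5, 1:5] = True                     # this frame's posture scores highest
assert best_pose_frame(masks) == 1
```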
Step 460, determining the target location information according to the target weight.
The position information of the photographed subject corresponding to the target weight is determined as the target position information.
And step 470, covering the clear image to the target position information.
And step 480, repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except for the clear image in the artifact area.
The artifact removing method provided by the embodiment of the application can select the optimal posture appearing in the long exposure period according to the score of the posture, determine the clear image according to the optimal posture, repair the artifact area and improve the repair efficiency of the long exposure image.
Fig. 5 is a schematic flowchart of an artifact removing method according to an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
and step 510, when it is detected that a long exposure image is acquired, acquiring a long exposure image through the first camera within a long exposure time, and continuously acquiring at least two first short exposure images through the second camera.
And step 520, adjusting at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images.
And step 530, obtaining the posture information of the subject in at least two second short-exposure images.
And step 540, counting the number of frames of the second short-exposure image with the same posture information.
Posture information of the same type is identified, and the number of second short-exposure images sharing the same posture information is counted to obtain a frame count for each posture.
And step 550, determining the weight of each second short-exposure image according to the frame number corresponding to each second short-exposure image.
The frame counts corresponding to the different postures are sorted, and the weight of each second short-exposure image is determined from the frame count of its posture.
And step 560, determining the target weight and the clear image according to the weights of the at least two second short-exposure images.
The larger the frame count, the higher the weight of the corresponding posture information. The clear image is generated based on the posture information with the highest weight.
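A minimal sketch of this frame-count weighting follows; treating two postures as "the same" when their subject masks overlap strongly (an IoU threshold) is an assumption introduced only for this sketch.

```python
import numpy as np

def same_posture(a, b, iou_thresh=0.9):
    """Assumed equality test: two posture masks are 'the same' if their IoU is high."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return union > 0 and inter / union >= iou_thresh

def frame_count_weights(subject_masks):
    """Weight of each frame = number of frames sharing its posture information."""
    return [sum(same_posture(m, other) for other in subject_masks) for m in subject_masks]

masks = [np.zeros((6, 6), dtype=bool) for _ in range(4)]
for m in masks[:3]:
    m[1:4, 1:4] = True                         # three frames share one posture
masks[3][2:6, 2:6] = True                      # a different posture
weights = frame_count_weights(masks)           # [3, 3, 3, 1]: the repeated posture wins
```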
Step 570, determining the target position information according to the target weight.
The position of the subject in the second short-exposure image having the largest number of frames is determined as the target position information.
And step 580, covering the clear image to the target position information.
And step 590, repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except for the clear image in the artifact area.
The artifact eliminating method provided by the embodiment of the application can determine the clear image according to the posture information with the largest occurrence frequency in the long exposure period, repair the artifact area based on the clear image, and improve the repair efficiency of the long exposure image.
Fig. 6 is a schematic flowchart of an artifact removing method according to an embodiment of the present application, which is used to further describe the foregoing embodiment, and includes:
and step 610, when the acquisition of the long exposure image is detected, acquiring a long exposure image through the first camera within the long exposure time, and continuously acquiring at least two first short exposure images through the second camera.
And step 620, adjusting at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images.
Step 630, each second short-exposure image is scored separately.
Each second short-exposure image is scored according to its composition.
And step 640, determining the weight of the second short-exposure image according to the grading result.
Several second short-exposure images with higher scores are selected, for example those whose scores fall within the top 40%, and their scores are used as the weights of the second short-exposure images.
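The sketch below reads the passage above as keeping the frames whose composition scores fall in the top 40% and using those scores directly as weights; the quantile cutoff is one interpretation, and the scores themselves are supplied as plain numbers rather than computed from real compositions.

```python
import numpy as np

def select_score_weights(scores, keep_fraction=0.4):
    """Map frame index -> weight (its score) for frames in the top `keep_fraction` of scores."""
    cutoff = np.quantile(scores, 1.0 - keep_fraction)
    return {i: s for i, s in enumerate(scores) if s >= cutoff}

scores = [0.31, 0.82, 0.57, 0.90, 0.44]        # assumed composition scores per frame
weights = select_score_weights(scores)         # keeps the two highest-scoring frames
target_idx = max(weights, key=weights.get)     # target weight = highest score (frame 3)
```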
And 650, determining target weight and sharp image according to the weights of the at least two second short-exposure images.
The weight of the second short-exposure image with the highest score is determined as the target weight, and the clear image is determined from the subject in that second short-exposure image.
And step 660, determining target position information according to the target weight.
The position of the photographed subject in the highest-scoring second short-exposure image is determined as the target position information.
Step 670, covering the clear image to the target position information.
And step 680, repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except for the clear image in the artifact area.
The artifact removing method provided by the embodiment of the application can determine the target weight based on the score of the second short-exposure image, determine the clear image based on the second short-exposure image corresponding to the target weight, repair the artifact area based on the clear image, and improve the repair efficiency of the long-exposure image.
Fig. 7 is a schematic structural diagram of an artifact removing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: an acquisition module 710, an adjustment module 720, and a repair module 730.
The acquisition module 710 is configured to, when it is detected that a long exposure image is acquired, acquire a long exposure image through a first camera within a long exposure time, and continuously acquire at least two first short exposure images through a second camera, where an exposure time of the long exposure image is longer than an exposure time of the first short exposure image;
an adjusting module 720, configured to adjust the at least two first short-exposure images according to the brightness of the long-exposure image obtained by the obtaining module 710 to obtain at least two second short-exposure images;
a repairing module 730, configured to repair the artifact region in the long-exposure image according to the at least two second short-exposure images obtained by the adjusting module 720.
Further, the obtaining module 710 is configured to:
determining the overlapping view range of the first camera and the second camera according to angle information in a long exposure time;
and in the overlapped view range, acquiring a long exposure image through the first camera, and continuously acquiring at least two first short exposure images through the second camera.
Further, the obtaining module 710 is configured to:
acquiring a hardware distance between the first camera and the second camera;
and when the hardware distance is greater than a preset correction distance, correcting the long exposure image or the at least two first short exposure images.
Further, the repair module 730 is used for:
Acquiring the weights of the at least two second short-exposure images;
determining a target weight and a clear image according to the weights of the at least two second short-exposure images, wherein the target weight is the highest value among the weights of the at least two second short-exposure images, and the clear image is an image area corresponding to the artifact area in the second short-exposure image corresponding to the target weight;
determining target position information according to the shot object in the second short-exposure image corresponding to the target weight;
overlaying the sharp image to the target location information;
and repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except the clear image in the artifact area.
Further, the repairing module 730 is configured to obtain weights of the at least two second short-exposure images, and includes:
acquiring the posture information of the shot subject in the at least two second short-exposure images;
and determining the weight of each second short-exposure image according to the posture information.
Further, the repairing module 730 is configured to determine the weight of each second short-exposure image according to the posture information, and includes:
counting the number of frames of second short-exposure images having the same posture information;
and determining the weight of each second short-exposure image according to the frame number corresponding to each second short-exposure image.
Further, the repairing module 730 is configured to obtain weights of the at least two second short-exposure images, and includes:
respectively scoring each second short-exposure image;
and determining the weight of the second short-exposure image according to the grading result.
In the artifact removing device provided in the embodiment of the present application, first, when it is detected that a long-exposure image is to be acquired, the acquisition module 710 acquires the long-exposure image through the first camera over the long exposure time and continuously acquires at least two first short-exposure images through the second camera; then, the adjusting module 720 adjusts the at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images; finally, the repairing module 730 repairs the artifact region in the long-exposure image according to the at least two second short-exposure images. Compared with performing long-exposure shooting with a single camera, the embodiment of the application uses the dual cameras of the mobile terminal: one camera performs the long exposure while the other continuously acquires at least two first short-exposure images during the long exposure; the brightness of the first short-exposure images is adjusted to obtain second short-exposure images, and the artifact region in the long-exposure image is repaired according to the at least two consecutive second short-exposure images. This improves the reliability of the long-exposure image, avoids repeated shooting, and raises the utilization of system resources. At the same time, neither the photographer nor the photographed person needs to remain strictly still, which improves the usability of long-exposure photography. The multi-frame motion of the subject during the long exposure can be recovered from the at least two consecutive second short-exposure images, so the artifact region can be repaired more accurately. In addition, because the two cameras acquire the long-exposure image and the short-exposure images simultaneously, the short-exposure images used to repair the artifact cover the same moments as the long exposure and the imaging of the two cameras does not interfere with each other; the mobile terminal with dual cameras can therefore obtain the long-exposure image and the short-exposure images (the first and second short-exposure images) for the same time period, and the artifact can be repaired more efficiently.
The device can execute the methods provided by all the embodiments of the application, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present application.
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal may include: a housing (not shown), a memory 801, a Central Processing Unit (CPU) 802 (also called a processor, hereinafter referred to as CPU), a computer program stored in the memory 801 and operable on the processor 802, a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the shell; the CPU802 and the memory 801 are provided on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the terminal; the memory 801 is used for storing executable program codes; the CPU802 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 801.
The terminal further comprises: peripheral interface 803, RF (Radio Frequency) circuitry 805, audio circuitry 806, speakers 811, power management chip 808, input/output (I/O) subsystem 809, touch screen 812, other input/control devices 810, and external port 804, which communicate over one or more communication buses or signal lines 807.
It should be understood that the illustrated terminal device 800 is merely one example of a terminal, and that the terminal device 800 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes in detail a terminal device provided in this embodiment, where the terminal device is a smart phone as an example.
A memory 801, which may be accessed by the CPU 802, the peripheral interface 803, and the like; the memory 801 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
A peripheral interface 803, said peripheral interface 803 allowing input and output peripherals of the device to be connected to the CPU802 and the memory 801.
I/O subsystem 809, which I/O subsystem 809 may connect input and output peripherals on the device, such as touch screen 812 and other input/control devices 810, to peripheral interface 803. The I/O subsystem 809 may include a display controller 8091 and one or more input controllers 8092 for controlling other input/control devices 810. Where one or more input controllers 8092 receive electrical signals from or transmit electrical signals to other input/control devices 810, other input/control devices 810 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels. It is worth noting that the input controller 8092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
According to its operating principle and the medium used to transmit information, the touch screen 812 may be resistive, capacitive, infrared, or surface acoustic wave. By installation method, the touch screen 812 may be add-on, built-in, or integrated. By technical principle, the touch screen 812 may be a vector-pressure-sensing, resistive, capacitive, infrared, or surface-acoustic-wave touch screen.
A touch screen 812, which is the input and output interface between the terminal and the user and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like. Optionally, the touch screen 812 sends electrical signals triggered by the user on the touch surface to the processor 802.
The display controller 8091 in the I/O subsystem 809 receives electrical signals from the touch screen 812 or sends electrical signals to the touch screen 812. The touch screen 812 detects a contact on the touch screen, and the display controller 8091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 812, that is, implements a human-computer interaction, and the user interface object displayed on the touch screen 812 may be an icon for running a game, an icon networked to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 805 is mainly used to establish communication between the terminal and a wireless network (i.e., the network side) and to receive and transmit data between the terminal and the wireless network, for example sending and receiving short messages and e-mails.
The audio circuit 806 is mainly used to receive audio data from the peripheral interface 803, convert the audio data into an electric signal, and transmit the electric signal to the speaker 811.
The speaker 811 is used to convert the voice signals received by the terminal from the wireless network through the RF circuit 805 into sound and play the sound to the user.
And the power management chip 808 is used for supplying power and managing power to the hardware connected with the CPU802, the I/O subsystem and the peripheral interface.
In this embodiment, the CPU 802 is configured to:
when it is detected that a long-exposure image is to be acquired, acquiring the long-exposure image through a first camera over the long exposure time, and continuously acquiring at least two first short-exposure images through a second camera, wherein the exposure time of the long-exposure image is longer than that of the first short-exposure images;
adjusting the at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images;
and repairing the artifact areas in the long-exposure images according to the at least two second short-exposure images.
Further, the acquiring a long exposure image by the first camera within the long exposure time and continuously acquiring at least two first short exposure images by the second camera includes:
determining the overlapping view range of the first camera and the second camera according to angle information in a long exposure time;
and in the overlapped view range, acquiring a long exposure image through the first camera, and continuously acquiring at least two first short exposure images through the second camera.
Further, after acquiring a long exposure image by the first camera and continuously acquiring at least two first short exposure images by the second camera, the method further includes:
acquiring a hardware distance between the first camera and the second camera;
and when the hardware distance is greater than a preset correction distance, correcting the long exposure image or the at least two first short exposure images.
Further, repairing the artifact areas in the long-exposure images according to the at least two second short-exposure images includes:
acquiring the weights of the at least two second short-exposure images;
determining a target weight and a clear image according to the weights of the at least two second short-exposure images, wherein the target weight is the highest value among the weights of the at least two second short-exposure images, and the clear image is an image area corresponding to the artifact area in the second short-exposure image corresponding to the target weight;
determining target position information according to the shot object in the second short-exposure image corresponding to the target weight;
overlaying the sharp image to the target location information;
and repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except the clear image in the artifact area.
Further, the acquiring the weights of the at least two second short-exposure images includes:
acquiring the posture information of the shot subject in the at least two second short-exposure images;
and determining the weight of each second short-exposure image according to the posture information.
Further, determining the weight of each second short-exposure image according to the posture information includes:
counting the number of frames of second short-exposure images having the same posture information;
and determining the weight of each second short-exposure image according to the frame number corresponding to each second short-exposure image.
Further, the acquiring the weights of the at least two second short-exposure images includes:
respectively scoring each second short-exposure image;
and determining the weight of the second short-exposure image according to the grading result.
An embodiment of the present application further provides a storage medium containing instructions executable by a terminal device, where the instructions, when executed by a processor of the terminal device, perform an artifact removing method, and the method includes:
when it is detected that a long-exposure image is to be acquired, acquiring the long-exposure image through a first camera over the long exposure time, and continuously acquiring at least two first short-exposure images through a second camera, wherein the exposure time of the long-exposure image is longer than that of the first short-exposure images;
adjusting the at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images;
and repairing the artifact areas in the long-exposure images according to the at least two second short-exposure images.
Further, the acquiring a long exposure image by the first camera within the long exposure time and continuously acquiring at least two first short exposure images by the second camera includes:
determining the overlapping view range of the first camera and the second camera according to angle information in a long exposure time;
and in the overlapped view range, acquiring a long exposure image through the first camera, and continuously acquiring at least two first short exposure images through the second camera.
Further, after acquiring a long exposure image by the first camera and continuously acquiring at least two first short exposure images by the second camera, the method further includes:
acquiring a hardware distance between the first camera and the second camera;
and when the hardware distance is greater than a preset correction distance, correcting the long exposure image or the at least two first short exposure images.
Further, repairing the artifact areas in the long-exposure images according to the at least two second short-exposure images includes:
acquiring the weights of the at least two second short-exposure images;
determining a target weight and a clear image according to the weights of the at least two second short-exposure images, wherein the target weight is the highest value among the weights of the at least two second short-exposure images, and the clear image is an image area corresponding to the artifact area in the second short-exposure image corresponding to the target weight;
determining target position information according to the shot object in the second short-exposure image corresponding to the target weight;
overlaying the sharp image to the target location information;
and repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except the clear image in the artifact area.
Further, the acquiring the weights of the at least two second short-exposure images includes:
acquiring the posture information of the shot subject in the at least two second short-exposure images;
and determining the weight of each second short-exposure image according to the posture information.
Further, determining the weight of each second short-exposure image according to the posture information includes:
counting the number of frames of second short-exposure images having the same posture information;
and determining the weight of each second short-exposure image according to the frame number corresponding to each second short-exposure image.
Further, the acquiring the weights of the at least two second short-exposure images includes:
respectively scoring each second short-exposure image;
and determining the weight of the second short-exposure image according to the grading result.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the artifact removing operation described above, and may also perform related operations in the artifact removing method provided in any embodiments of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (7)

1. A method of artifact removal, comprising:
when it is detected that a long-exposure image is to be acquired, acquiring the long-exposure image through a first camera over the long exposure time, and continuously acquiring at least two first short-exposure images through a second camera, wherein the exposure time of the long-exposure image is longer than that of the first short-exposure images;
adjusting the at least two first short-exposure images according to the brightness of the long-exposure image to obtain at least two second short-exposure images;
restoring an artifact area in the long-exposure image according to the at least two second short-exposure images;
repairing artifact areas in the long-exposure images according to the at least two second short-exposure images, comprising:
acquiring the posture information of the shot object in the at least two second short-exposure images;
counting the number of frames of second short-exposure images having the same posture information;
determining the weight of each second short-exposure image according to the frame number corresponding to each second short-exposure image;
determining a target weight according to the weights of the at least two second short-exposure images, wherein the target weight is the highest value in the weights of the at least two second short-exposure images;
repairing the shot object in the second short-exposure image corresponding to the target weight by using other second short-exposure images to obtain a clear image of the shot object;
determining a target position according to the shot object in the second short-exposure image corresponding to the target weight, wherein the target position represents the position of the shot object in the second short-exposure image corresponding to the target weight in the artifact area of the long-exposure image;
covering the clear image to a target position corresponding to the shot object;
and repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are areas except the clear image in the artifact area.
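To make the flow recited in claim 1 concrete, the following Python sketch groups the second short-exposure frames by subject posture, weights each frame by the number of frames sharing its posture, picks the highest-weighted frame as the repair reference, and pastes the repaired subject back into the artifact area of the long-exposure image. The helper callables (posture estimation, reference-based deblurring, subject localisation) and the simple averaging used for the remaining artifact pixels are assumptions made for illustration; the claim does not prescribe particular algorithms for those steps.

```python
import numpy as np

def repair_artifact(long_exposure, short_exposures, artifact_mask,
                    estimate_posture, deblur_with_references, locate_subject):
    # Posture information for the subject in every second short-exposure frame
    # (estimate_posture is assumed to return a hashable posture label).
    postures = [estimate_posture(img) for img in short_exposures]

    # Count frames per posture; a frame's weight is the size of its posture group.
    counts = {}
    for p in postures:
        counts[p] = counts.get(p, 0) + 1
    weights = [counts[p] for p in postures]

    # Target frame: the one with the highest weight.
    target_idx = int(np.argmax(weights))
    references = [img for i, img in enumerate(short_exposures) if i != target_idx]

    # Repair the subject in the target frame using the other frames, yielding a
    # clear subject patch and its binary mask (both assumed to have shape (h, w)).
    clear_subject, subject_mask = deblur_with_references(
        short_exposures[target_idx], references)

    # Locate the subject inside the artifact area of the long exposure and
    # overlay the clear patch at that target position.
    result = long_exposure.copy()
    y, x = locate_subject(long_exposure, artifact_mask, clear_subject)
    h, w = subject_mask.shape
    region = result[y:y + h, x:x + w]
    region[subject_mask] = clear_subject[subject_mask]

    # Repair the rest of the artifact area (everything except the pasted subject)
    # by blending the short exposures with the long exposure.
    remaining = artifact_mask.copy()
    remaining[y:y + h, x:x + w] &= ~subject_mask
    blend = np.mean(np.stack(short_exposures + [long_exposure]), axis=0)
    result[remaining] = blend[remaining].astype(result.dtype)
    return result
```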
2. The artifact removal method according to claim 1, wherein the acquiring a long-exposure image through the first camera within the long exposure time and continuously acquiring at least two first short-exposure images through the second camera comprises:
determining an overlapping field of view of the first camera and the second camera according to angle information within the long exposure time;
and within the overlapping field of view, acquiring a long-exposure image through the first camera and continuously acquiring at least two first short-exposure images through the second camera.
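A minimal sketch of the overlap determination in claim 2, under the assumption that the "angle information" is each camera's horizontal view direction and field-of-view width in degrees (the claim does not fix a representation):

```python
def overlapping_view_range(dir1_deg, fov1_deg, dir2_deg, fov2_deg):
    """Return the overlapping angular range (start, end) of two cameras,
    or None if their fields of view do not intersect."""
    lo1, hi1 = dir1_deg - fov1_deg / 2, dir1_deg + fov1_deg / 2
    lo2, hi2 = dir2_deg - fov2_deg / 2, dir2_deg + fov2_deg / 2
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    return (lo, hi) if lo < hi else None

# Example: a 78-degree main camera and a 120-degree wide camera pointing 10 degrees apart.
print(overlapping_view_range(0, 78, 10, 120))  # -> (-39.0, 39.0)
```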
3. The artifact removal method according to claim 1, further comprising, after acquiring a long-exposure image through the first camera and continuously acquiring at least two first short-exposure images through the second camera:
acquiring a hardware distance between the first camera and the second camera, wherein the hardware distance is the distance between the mounting positions of the two cameras on the mobile terminal;
and when the hardware distance is greater than a preset correction distance, correcting the long-exposure image or the at least two first short-exposure images.
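The check in claim 3 might look like the sketch below. The pinhole-disparity shift is only a stand-in for a proper rectification based on the calibrated extrinsics of the two cameras, and the threshold, focal length, and scene-depth defaults are purely illustrative assumptions.

```python
import numpy as np

def maybe_correct_parallax(short_img, baseline_mm, threshold_mm=10.0,
                           focal_px=1500.0, scene_depth_mm=2000.0):
    """Correct a short-exposure image only when the hardware distance
    (baseline) between the two cameras exceeds the preset correction distance."""
    if baseline_mm <= threshold_mm:
        return short_img  # cameras close enough: no correction needed
    # Approximate disparity (in pixels) for a fronto-parallel scene at scene_depth_mm.
    disparity_px = int(round(focal_px * baseline_mm / scene_depth_mm))
    return np.roll(short_img, -disparity_px, axis=1)  # shift columns to align the views
```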
4. The artifact removal method according to claim 1, wherein obtaining the weights of the at least two second short-exposure images comprises:
scoring each second short-exposure image respectively;
and determining the weight of each second short-exposure image according to the scoring result.
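Claim 4 leaves the scoring criterion open; the sketch below assumes a Laplacian-variance sharpness score and normalises the scores into weights.

```python
import numpy as np

def score_and_weight(short_exposures):
    """Score each second short-exposure image (here: variance of a simple
    Laplacian as a sharpness proxy -- an assumed criterion) and turn the
    scores into normalised weights."""
    def laplacian_variance(img):
        gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
        lap = (-4 * gray
               + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
               + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
        return float(lap.var())

    scores = np.array([laplacian_variance(img) for img in short_exposures])
    total = scores.sum()
    weights = scores / total if total > 0 else np.full(len(scores), 1 / len(scores))
    return scores, weights
```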
5. An artifact removal apparatus, comprising:
the acquisition module is used for acquiring a long-exposure image through a first camera within a long exposure time, and continuously acquiring at least two first short-exposure images through a second camera, when acquisition of a long-exposure image is detected;
the adjusting module is used for adjusting the at least two first short-exposure images according to the brightness of the long-exposure image acquired by the acquisition module, to obtain at least two second short-exposure images;
the repairing module is used for repairing an artifact area in the long-exposure image according to the at least two second short-exposure images obtained by the adjusting module;
the repairing module is further used for acquiring posture information of a photographed subject in the at least two second short-exposure images;
counting the number of frames of the second short-exposure images having the same posture information;
determining a weight of each second short-exposure image according to the number of frames corresponding to that second short-exposure image;
determining a target weight according to the weights of the at least two second short-exposure images, wherein the target weight is the highest value among the weights of the at least two second short-exposure images;
repairing the photographed subject in the second short-exposure image corresponding to the target weight by using the other second short-exposure images, to obtain a clear image of the photographed subject;
determining a target position according to the photographed subject in the second short-exposure image corresponding to the target weight, wherein the target position represents the position, in the artifact area of the long-exposure image, of the photographed subject in the second short-exposure image corresponding to the target weight;
overlaying the clear image onto the target position corresponding to the photographed subject;
and repairing other areas in the artifact area according to the at least two second short-exposure images and the long-exposure image, wherein the other areas are the areas in the artifact area other than the clear image.
6. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the artifact removal method according to any one of claims 1 to 4.
7. A terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the artifact removal method according to any one of claims 1 to 4 when executing the computer program.
CN201810936334.8A 2018-08-16 2018-08-16 Artifact eliminating method and device, storage medium and terminal Active CN109040524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810936334.8A CN109040524B (en) 2018-08-16 2018-08-16 Artifact eliminating method and device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN109040524A CN109040524A (en) 2018-12-18
CN109040524B true CN109040524B (en) 2021-06-29

Family

ID=64631761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810936334.8A Active CN109040524B (en) 2018-08-16 2018-08-16 Artifact eliminating method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN109040524B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110958401B (en) * 2019-12-16 2022-08-23 北京迈格威科技有限公司 Super night scene image color correction method and device and electronic equipment
CN114930799B (en) * 2020-01-09 2024-02-20 Oppo广东移动通信有限公司 Method for electronic device with multiple cameras and electronic device
CN113472996B (en) * 2020-03-31 2022-11-22 华为技术有限公司 Picture transmission method and device
CN111970447B (en) * 2020-08-25 2021-12-21 云谷(固安)科技有限公司 Display device and mobile terminal
CN112153291B (en) * 2020-09-27 2022-09-06 维沃移动通信有限公司 Photographing method and electronic equipment
CN112689099B (en) * 2020-12-11 2022-03-22 北京邮电大学 Double-image-free high-dynamic-range imaging method and device for double-lens camera

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101222584A (en) * 2007-01-12 2008-07-16 Sanyo Electric Co., Ltd. Apparatus and method for blur detection, and apparatus and method for blur correction
CN101637019A (en) * 2007-03-09 2010-01-27 Eastman Kodak Company Multiple lens camera providing a range map
CN103945145A (en) * 2013-01-17 2014-07-23 Samsung Techwin Co., Ltd. Apparatus and method for processing image
CN104052905A (en) * 2013-03-12 2014-09-17 Samsung Techwin Co., Ltd. Method and apparatus for processing image
CN104349069A (en) * 2013-07-29 2015-02-11 Quanta Computer Inc. Method for shooting high dynamic range film
JP2015082675A (en) * 2013-10-21 2015-04-27 Samsung Techwin Co., Ltd. Image processing device and image processing method
CN105323425A (en) * 2014-05-30 2016-02-10 Apple Inc. Scene motion correction in fused image systems
CN106331513A (en) * 2016-09-06 2017-01-11 Shenzhen Meilizhi Technology Co., Ltd. Method and system for acquiring high-quality skin image
CN107820022A (en) * 2017-10-30 2018-03-20 Vivo Mobile Communication Co., Ltd. Photographing method and mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460492B2 (en) * 2013-05-10 2016-10-04 Hanwha Techwin Co., Ltd. Apparatus and method for image processing
US9641820B2 (en) * 2015-09-04 2017-05-02 Apple Inc. Advanced multi-band noise reduction

Also Published As

Publication number Publication date
CN109040524A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
US11838650B2 (en) Photographing using night shot mode processing and user interface
CN109040524B (en) Artifact eliminating method and device, storage medium and terminal
EP3579544B1 (en) Electronic device for providing quality-customized image and method of controlling the same
CN113454982B (en) Electronic device for stabilizing image and method of operating the same
CN109167931B (en) Image processing method, device, storage medium and mobile terminal
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
JP6924901B2 (en) Photography method and electronic equipment
CN109726064B (en) Method, device and system for simulating abnormal operation of client and storage medium
KR20110006243A (en) Apparatus and method for manual focusing in portable terminal
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN108683858A (en) It takes pictures optimization method, device, storage medium and terminal device
CN109302563B (en) Anti-shake processing method and device, storage medium and mobile terminal
CN109120864B (en) Light supplement processing method and device, storage medium and mobile terminal
CN110868533A (en) HDR mode determination method, device, storage medium and terminal
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
CN108540726B (en) Method and device for processing continuous shooting image, storage medium and terminal
CN114093020A (en) Motion capture method, motion capture device, electronic device and storage medium
CN109246345B (en) Beautiful pupil shooting method and device, storage medium and mobile terminal
CN111028192B (en) Image synthesis method and electronic equipment
CN113065457A (en) Face detection point processing method and device, computer equipment and storage medium
CN115205964A (en) Image processing method, apparatus, medium, and device for pose prediction
KR20200098029A (en) Screen providing method and electronic device supporting the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant