CN109120853B - Long exposure image shooting method and terminal - Google Patents


Info

Publication number
CN109120853B
CN109120853B (application number CN201811133549.2A)
Authority
CN
China
Prior art keywords
image
area
input
target
exposure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811133549.2A
Other languages
Chinese (zh)
Other versions
CN109120853A (en)
Inventor
彭俊华 (Peng Junhua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811133549.2A priority Critical patent/CN109120853B/en
Publication of CN109120853A publication Critical patent/CN109120853A/en
Application granted granted Critical
Publication of CN109120853B publication Critical patent/CN109120853B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention provides a long-exposure image shooting method and a terminal, relates to the technical field of communication, and aims to solve the problem that the quality of images shot by a terminal is unstable. The method comprises the following steps: receiving a first input of a user, where the first input is used to trigger the terminal to acquire images within a target exposure duration; displaying a first target image in a first area, where the first target image is an image synthesized from the N acquired first images; and, after a first duration, updating the first target image displayed in the first area to a second target image. The second target image is an image synthesized from the first target image and M acquired second images, the M second images are acquired within the first duration, the first duration is less than the target exposure duration, and N and M are both positive integers.

Description

Long exposure image shooting method and terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a long-exposure image shooting method and a terminal.
Background
With the development of terminal technology, the shooting function in the terminal is more and more powerful, and more terminals are provided with professional shooting modes.
A user can shoot a long-exposure image at night using the professional shooting mode of a terminal. First, the user sets an exposure duration. Then, after receiving the user's operation that triggers shooting, the terminal starts capturing images and stops when the shooting duration reaches the set exposure duration. After shooting is completed, the terminal combines the captured images into one image and displays it on the display screen. Currently, while shooting a long-exposure image, a terminal can display the captured content in one of two ways: either the terminal displays the instantaneous picture currently being captured on the display screen, or it displays the first image captured after the shutter is pressed throughout the whole process.
However, in both display modes, the user cannot observe the exposure effect during shooting. If the exposure duration set by the user is too long, the final synthesized image may be overexposed after the exposure duration elapses, which causes the quality of images shot by the terminal to be unstable.
Disclosure of Invention
The embodiment of the invention provides a long-exposure image shooting method and a terminal, and aims to solve the problem that the quality of an image shot by the terminal is unstable.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a long-exposure image shooting method, which includes: receiving a first input of a user, where the first input is used to trigger a terminal to acquire images within a target exposure duration; in response to the first input, displaying a first target image in a first area, where the first target image is an image synthesized from the N acquired first images; and, after a first duration, updating the first target image displayed in the first area to a second target image. The second target image is an image synthesized from the first target image and M acquired second images, the M second images are acquired within the first duration, the first duration is less than the target exposure duration, and N and M are both positive integers.
In a second aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a receiving module and a display module. The receiving module is used to receive a first input of a user, where the first input is used to trigger the terminal to acquire images within a target exposure duration. The display module is used to display, in response to the first input received by the receiving module, a first target image in a first area, where the first target image is an image synthesized from the N acquired first images, and, after a first duration, to update the first target image displayed in the first area to a second target image. The second target image is an image synthesized from the first target image and M acquired second images, the M second images are acquired within the first duration, the first duration is less than the target exposure duration, and N and M are both positive integers.
In a third aspect, an embodiment of the present invention provides a terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the long-exposure image capturing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the long-exposure image capturing method according to the first aspect.
In the embodiment of the invention, a terminal receives a first input of a user, where the first input is used to trigger the terminal to acquire images within a target exposure duration; in response to the first input, the terminal displays a first target image in a first area, where the first target image is an image synthesized from the N acquired first images; and, after a first duration, the terminal updates the first target image displayed in the first area to a second target image. The second target image is an image synthesized from the first target image and the M acquired second images, and the M second images are acquired by the terminal within the first duration. The terminal can therefore keep updating the synthesized image displayed in the first area as the exposure time increases, so the user can observe during shooting whether the captured image is becoming overexposed and can judge from the displayed second target image whether to terminate shooting, thereby avoiding the problem of unstable quality of images shot by the terminal.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a long-exposure image capturing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a display interface according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another display interface provided in the embodiment of the present invention;
FIG. 5 is a schematic diagram of another display interface provided in the embodiment of the present invention;
FIG. 6 is a schematic diagram of another display interface provided in the embodiment of the present invention;
FIG. 7 is a schematic diagram of another display interface provided in the embodiment of the present invention;
fig. 8 is a schematic diagram of a possible structure of a terminal according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a possible structure of another terminal according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of another possible terminal according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another possible terminal according to an embodiment of the present invention;
fig. 12 is a schematic hardware structure diagram of a terminal according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "Plurality" means two or more.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first region and the second region, etc. are for distinguishing different regions, and are not for describing a particular order of the regions.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The terminal in the embodiment of the present invention may be a terminal having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited in this respect.
The following describes a software environment applied to the long-exposure image shooting method provided by the embodiment of the invention, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of applications, and a developer can develop applications based on the application framework layer provided that they comply with the development principles of that framework.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the long-exposure image shooting method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the long-exposure image shooting method may be run based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can realize the long-exposure image shooting method provided by the embodiment of the invention by running the software program in the android operating system.
A long-exposure image capturing method according to an embodiment of the present invention will be described below with reference to fig. 2. Fig. 2 is a schematic flowchart of a long-exposure image capturing method according to an embodiment of the present invention; as shown in fig. 2, the long-exposure image capturing method includes steps 201 to 203:
step 201, the terminal receives a first input of a user.
The first input is used for triggering the terminal to acquire an image within the target exposure duration.
Step 202, responding to the first input, the terminal displays the first target image in the first area.
The first target image is an image synthesized according to the acquired N first images, and N is a positive integer.
It will be appreciated that the N first images are acquired by the terminal prior to displaying the first target image.
It is understood that the first image may be a single frame image captured by the terminal, and the first target image may be a long exposure image cumulatively synthesized from the N first images.
It should be noted that, when capturing a long-exposure image, the terminal may synthesize one collected frame at a time in the background, or synthesize multiple collected frames at a time; this is not specifically limited in the embodiment of the present invention. Assuming that the target image is the synthesized long-exposure image and that the terminal synthesizes one collected frame at a time, the first target image is the first collected frame, the second target image is the image synthesized from the first and second collected frames, and the third target image is the image synthesized from the second target image and the third collected frame; that is, the nth target image is the image synthesized from the (n-1)th target image and the nth collected frame.
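The cumulative synthesis just described (each target image combines the previous target image with the newest captured frame) can be sketched in a few lines of Python. The patent does not specify the blending rule, so a running average over float images in [0, 1] is assumed here purely for illustration, and `composite` is a hypothetical helper name:

```python
import numpy as np

def composite(prev_target, new_frame, n):
    """Blend the nth captured frame into the running long-exposure image.

    prev_target is the (n-1)th target image, or None for the first frame;
    arrays are float32 in [0, 1].  The running-average rule used here is
    only one possible choice; the patent leaves the algorithm open.
    """
    if prev_target is None:
        return new_frame.astype(np.float32)
    # Running average: target_n = ((n-1) * target_{n-1} + frame_n) / n
    return ((n - 1) * prev_target + new_frame.astype(np.float32)) / n
```

With this rule the target image after n frames equals the mean of all n frames, so brightness stays bounded; an additive rule would instead accumulate brightness toward overexposure as the exposure duration grows.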
It should be noted that the terminal provided in the embodiment of the present invention may have a touch screen, which may be configured to receive an input from the user and, in response, display content corresponding to that input. The first input may be a touch screen input, a fingerprint input, a gravity input, a key input, or the like. A touch screen input is an input by the user on the terminal's touch screen, such as a press, long press, slide, click, or hover input (an input made by the user near the touch screen without touching it). A fingerprint input is an input to the terminal's fingerprint recognizer, such as a sliding fingerprint, long-press fingerprint, single-click fingerprint, or double-click fingerprint. A gravity input is, for example, shaking the terminal in a specific direction or a specific number of times. A key input corresponds to a single-click, double-click, long-press, or combination-key input on a key of the terminal such as the power key, volume key, or Home key. Specifically, the embodiment of the present invention does not limit the manner of the first input, which may be any realizable manner.
Optionally, the target exposure duration in the embodiment of the present invention may be an exposure duration manually set by a user, or may be an exposure duration default set in the terminal, which is not specifically limited in the embodiment of the present invention.
Generally, when a user shoots a long-exposure image with a terminal, the exposure duration may be set to 8 s, 16 s, or 32 s; of course, the user may also set other exposure durations based on experience.
Step 203, after the first time period, the terminal updates the first target image displayed in the first area to a second target image.
The second target image is an image synthesized according to the first target image and the acquired M second images, the M second images are images acquired within a first time length, the first time length is smaller than the target exposure time length, and M is a positive integer.
It should be noted that the first time duration is a time duration after the terminal displays the first target image.
Optionally, the first duration may be the duration for the terminal to acquire one image. When M is equal to 1, the terminal acquires one image, synthesizes a target image from the acquired image and the previously synthesized target image, and updates the displayed target image once.
Optionally, the first duration may be the duration for the terminal to acquire multiple images. When M is greater than 1, after the terminal acquires M images, it synthesizes the second target image from the M acquired images and the first target image; that is, the target image displayed in the first area is updated once per batch of M acquired images.
It should be noted that the first duration may be the duration for the terminal to capture one image, or the duration for the terminal to capture multiple images continuously; this is not specifically limited in the embodiment of the present invention.
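The M > 1 case (one display update per batch of M captured frames) can be sketched as follows. Additive blending with clipping is assumed purely for illustration, since the patent leaves the synthesis algorithm open, and `update_display` is a hypothetical name:

```python
import numpy as np

def update_display(first_target, frames_m):
    """Blend the M frames captured during the first duration into the
    current target image and return the second target image to show in
    the first area.  Additive blending with clipping is an assumption;
    a real terminal might average or tone-map instead.
    """
    out = first_target.astype(np.float32)
    for frame in frames_m:
        # Accumulate light; clip keeps values displayable in [0, 1]
        out = np.clip(out + frame.astype(np.float32), 0.0, 1.0)
    return out
```

Under additive blending, a batch of bright frames quickly saturates the composite toward 1.0, which is exactly the overexposure the user is meant to spot in the first area.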
According to the long-exposure image shooting method provided by the embodiment of the invention, a terminal receives a first input of a user, where the first input is used to trigger the terminal to acquire images within a target exposure duration; in response to the first input, the terminal displays a first target image in a first area, where the first target image is an image synthesized from the N acquired first images; and, after a first duration, the terminal updates the first target image displayed in the first area to a second target image. The second target image is an image synthesized from the first target image and the M acquired second images, and the M second images are acquired by the terminal within the first duration. The terminal can therefore keep updating the synthesized image displayed in the first area as the exposure time increases, so the user can observe during shooting whether the captured image is becoming overexposed and can judge from the displayed second target image whether to terminate shooting, thereby avoiding the problem of unstable quality of images shot by the terminal.
A possible implementation manner of the long-exposure image capturing method provided in the embodiment of the present invention further includes steps 204 and 205:
and step 204, the terminal receives a second input of the user.
And step 205, responding to the second input, stopping acquiring the image by the terminal, and displaying a third target image in the first area.
The third target image is an image synthesized according to all the acquired images within a second time length, the second time length is the time length from the moment of starting to acquire the images to the moment of stopping to acquire the images, the second time length is less than or equal to the target exposure time length, and the first time length is less than or equal to the second time length.
It should be noted that, while the terminal is acquiring images, the terminal also stops acquiring images once the exposure duration reaches the target exposure duration.
It should be noted that, in the embodiment of the present invention, step 204 may be executed after step 201 or after step 203; that is, step 204 may be executed at any time after the terminal receives the first input and before shooting stops when the exposure duration reaches the target exposure duration. This is not particularly limited in the embodiment of the present invention.
For example, assume the exposure duration set by the user is 30 s (i.e., the target exposure duration). If, 20 s after shooting starts, the user determines from the second target image displayed in the first area that the exposure of the image already meets the requirement and the image quality is good, and judges from experience that continued shooting may overexpose the captured image, the user may actively control the terminal to stop acquiring images.
It can be understood that, if the user determines that the acquired image already meets the exposure requirement, the user can actively stop image acquisition, so an image with better exposure quality can be obtained as needed, improving the user's shooting experience.
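Steps 204 and 205 amount to an acquisition loop that can end early on the user's second input. The sketch below uses plain Python with hypothetical callback hooks (`capture_frame`, `show`, `stop_requested`); none of these names come from the patent, and the running-average composite is again only an illustrative choice:

```python
def capture_long_exposure(capture_frame, show, target_exposure_s,
                          frame_interval_s, stop_requested):
    """Acquire frames until the target exposure duration elapses or the
    user's second input requests an early stop.

    capture_frame() returns one frame value, show() updates the first
    area, stop_requested() polls for the second input; all three are
    hypothetical hooks standing in for camera and UI plumbing.
    """
    frames = []
    elapsed = 0.0
    while elapsed < target_exposure_s and not stop_requested():
        frames.append(capture_frame())
        show(sum(frames) / len(frames))  # running-average composite
        elapsed += frame_interval_s
    composite = sum(frames) / len(frames) if frames else None
    return composite, frames
```

Because `stop_requested()` is checked every iteration, the final composite (the third target image of step 205) reflects exactly the frames captured up to the moment the user intervened.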
In a possible implementation manner, the long-exposure image capturing method provided in the embodiment of the present invention may further include, after step 201, step 206 and step 207:
and step 206, responding to the first input, and displaying the first preview image in the second area by the terminal.
Optionally, the first area and the second area are areas on the same display screen.
It should be noted that, when the first area and the second area are areas on the same display screen, the second area may be a sub-area in the first area, or may be an area adjacent to the first area, which is not specifically limited in this embodiment of the present invention.
Optionally, the first area and the second area are areas on different display screens.
It should be noted that, if the terminal is a terminal with two display screens, after the user triggers the terminal to start acquiring an image within the target exposure duration, the terminal may display the target image on the first display screen and display the preview image on the second display screen. The first display screen and the second display screen are any one of the two display screens in the terminal.
It can be understood that the terminal can display the preview image and the synthesized image in different areas of one display screen, or on two separate display screens, so that users with different habits can observe the image quality and check whether any single frame degrades the quality of the synthesized image.
It should be noted that, after the terminal acquires the preview image displayed in the second area, the preview image may be cached in the terminal, and after the shooting is completed, the terminal device may store the preview image, and may also clear the cache of the preview image.
And step 207, after the third duration, the terminal updates the first preview image displayed in the second area to the second preview image.
Wherein the third duration is less than or equal to the first duration.
It should be noted that, in the embodiment of the present invention, the third time duration may be a time duration for acquiring one frame of image by the terminal, or may also be a time duration for acquiring multiple frames of images, which is not specifically limited in this embodiment of the present invention.
When the third duration is the duration for the terminal to acquire one frame, the second preview image is the single frame acquired within that duration (analogous to the case where M is 1).
When the third duration is the duration for the terminal to acquire multiple frames, the image synthesized by the terminal at the current moment is synthesized from the previously synthesized image and the multiple frames acquired since that image was synthesized.
It should be noted that the terminal may collect preview images continuously during the third duration and update the first preview image displayed in the second area to the second preview image only after the third duration elapses. Of course, when the third duration equals the first duration, the terminal may update the preview image displayed in the second area each time one preview image is collected.
For example, when the third duration equals the first duration, the user can judge in real time whether the preview image displayed in the second area may cause overexposure of an area in the image. For instance, when the terminal is shooting a night scene, if strong light enters the picture within the target exposure duration, the captured image may be overexposed at the position of the strong light, affecting the overall quality of the captured image.
It should be noted that, in the long-exposure image capturing method provided by the embodiment of the present invention, after the terminal starts capturing images, the preview image displayed in the second display area is updated once every third duration.
Fig. 3 is a schematic diagram of a display interface according to an embodiment of the present invention. When the first area and the second area are areas of the same display screen, as shown in fig. 3 (a), an area 31 in the display screen is the first area, and an area 32 is the second area, and after receiving a first input from a user, the terminal displays a first target image in the area 31 and a preview image in the area 32. When the first area and the second area are areas in different display screens, then as shown in fig. 3 (b), the area 33 in the display screen on the right is the first area, and the area 34 in the display screen on the left is the second area. Of course, in the interface shown in fig. 3 (b), the area 33 in the right display screen may be the second area, and the area 34 in the left display screen may be the first area.
Based on this scheme, after receiving the first input of the user, the terminal can display not only the first target image in the first area but also the first preview image in the second area. As time passes, once the exposure duration exceeds the third duration, the terminal updates the first preview image displayed in the second area to a second preview image. The user can therefore determine from changes in the displayed preview image whether strong light has entered the shooting area, and thus whether the synthesized image is likely to be overexposed.
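The two refresh cadences described above (the composite in the first area updated every first duration, the preview in the second area every third duration, with the third duration no longer than the first) can be illustrated with a small scheduling sketch. Uniform intervals and the function name are assumptions for illustration:

```python
def refresh_schedule(total_s, first_s, third_s):
    """Return the times (in seconds) at which the composite image in the
    first area and the preview image in the second area are refreshed.

    The patent only requires third_s <= first_s; the uniform cadence
    modeled here is an illustrative assumption.
    """
    assert third_s <= first_s, "third duration must not exceed first duration"
    composite_ticks = [k * first_s
                       for k in range(1, int(total_s // first_s) + 1)]
    preview_ticks = [k * third_s
                     for k in range(1, int(total_s // third_s) + 1)]
    return composite_ticks, preview_ticks
```

When the third duration is strictly shorter than the first, the preview refreshes more often than the composite, which is what lets the user spot incoming strong light before it is blended into the long-exposure result.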
In one possible implementation, the first area includes a progress bar displaying the exposure time; after step 201, the long-exposure image shooting method provided in the embodiment of the present invention may further include step 208 and step 209:
in step 208, the terminal receives a third input of the user on the progress bar.
It should be noted that the user can slide on the progress bar; the length of the progress bar indicates the target exposure duration, and the progress button on the progress bar indicates the elapsed exposure duration.
Specifically, the third input may be an input of dragging the progress bar. Through the third input, the user can check whether the long-exposure image synthesized from the images captured up to any time before the current time meets the requirement.
It is to be understood that the third input may be an input performed by a user during the process of acquiring an image by the terminal, or may also be an input performed by the user after the image is acquired by the terminal, which is not specifically limited in this embodiment of the present invention.
And step 209, the terminal responds to the third input, updates the image displayed in the first area to the fourth target image, and updates the image displayed in the second area to the third image.
The fourth target image is an image synthesized from the images acquired from the start time to a first moment, where the first moment is the exposure time on the progress bar corresponding to the third input.
Illustratively, if the user moves the time indicated in the progress bar from exposure time 1 to exposure time 2, the exposure time corresponding to the third input is exposure time 2.
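Scrubbing the progress bar in this way (step 209) amounts to recompositing from only the frames captured up to the selected exposure time. A minimal sketch, assuming the terminal keeps each frame with its capture timestamp and blends by averaging; `image_at_time` is a hypothetical helper:

```python
def image_at_time(frames, timestamps, t):
    """Rebuild the long-exposure image as it looked at exposure time t:
    blend only the frames whose timestamps fall at or before t.

    frames and timestamps are parallel lists; averaging is an assumed
    blending rule, and None is returned if no frame qualifies.
    """
    selected = [f for f, ts in zip(frames, timestamps) if ts <= t]
    return sum(selected) / len(selected) if selected else None
```

Keeping the raw frames around is what makes this scrubbing possible: the composite for any earlier exposure time can be regenerated on demand rather than stored separately.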
Based on the scheme, after receiving the third input of the user, the terminal can display the synthesized image (namely, the fourth target image) corresponding to the exposure time corresponding to the third input of the user in the first area, and display the acquired single-frame image in the second area, so that the user can determine the quality of the synthesized image according to the images displayed in the respective areas.
In a possible implementation manner, the long-exposure image capturing method provided in the embodiment of the present invention, after step 208, may further include step 210:
step 210, in response to a third input, displaying at least one option in the first area.
Each option indicates a processing operation on the image displayed in the first area or the second area, and the processing operation includes at least one of the following: deleting the preview image currently displayed in the second area; deleting at least one preview image acquired within a target duration; deleting a preview image acquired at a preset time node; and storing the image currently displayed in the first area.
It should be noted that, in the embodiment of the present invention, the preset time node may be a time node selected by a user.
By way of example, fig. 4 is a schematic diagram of a display interface provided by an embodiment of the present invention, taking the case in which the preview image and the composite image are displayed on the two screens of a folding screen. As shown in (a) of fig. 4, the display area 41 in the left screen is the second area, and the display area 42 in the right screen is the first area. With reference to (a) of fig. 4, assume that the user slides the time axis from time T1 to time T2, where time T2 is earlier than time T1. The terminal then displays preview image 2 in the left screen and composite image 2 in the right screen. Each of option 45, option 46, option 47, and option 48 in the right screen may indicate one of "delete this frame", "select this frame", "delete start", "delete end", "save", or "end exposure". Preview image 1 is the preview image captured at time T1, and preview image 2 is the preview image captured at time T2.
It should be noted that "delete this frame" deletes the preview image currently displayed in the second area; "delete start" and "delete end" together delete the images captured within the target time period; "select this frame" combined with "delete" may be used to delete multiple discontinuous frames (i.e., the preview images captured at each of a plurality of preset time nodes); and "save" or "end exposure" may be used to store the image currently displayed in the first area.
It should be noted that fig. 4 is only an exemplary illustration, and in practical applications, the "option" may be a text description or a control, the number of the options may be multiple or one, and this is not specifically limited in this embodiment of the present invention.
Of course, in practical applications, the option may be always displayed in the first area, or may be triggered and displayed in the first area by a third input of the user, which is not specifically limited in this embodiment of the present invention.
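The processing operations behind these options can be sketched on the list of stored frames. The helper names and the integer "frames" below are illustrative only; the patent does not prescribe an implementation, and the blend function is left as a parameter.

```python
def delete_frame(frames, index):
    """'Delete this frame': remove the single preview frame at `index`."""
    return frames[:index] + frames[index + 1:]

def delete_range(frames, start, end):
    """'Delete start' / 'delete end': remove every frame captured in the
    selected span [start, end] (the target time period)."""
    return frames[:start] + frames[end + 1:]

def save_composite(frames, blend):
    """'Save' / 'end exposure': synthesize the remaining frames into the
    final long-exposure image with the given blend function."""
    out = frames[0]
    for f in frames[1:]:
        out = blend(out, f)
    return out

# Toy frames represented as integers; frame 99 is the 'overexposed' one.
frames = [10, 20, 99, 30, 40]
frames = delete_frame(frames, 2)         # cull the glare frame
composite = save_composite(frames, max)  # lighten blend of the rest
```

After any deletion, the terminal re-synthesizes the composite from the surviving frames, which is why the edit can happen during capture or afterwards.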
For ease of understanding, the following takes car-light trails shot at night as an example. Fig. 5 is a schematic diagram of a display interface provided by an embodiment of the present invention. As shown in (a) of fig. 5, assume that the display area 51 in the left screen is the first area and the display area 52 in the right screen is the second area. The first area displays a light trail 55 synthesized from the car lights; the car itself does not appear in the composite image. The second area displays the real-time preview image, in which the car can be seen. The user can slide back and forth with the slide button 54 on the progress bar 53 displayed in the first area. Assume that the current position of the slide button 54 indicates time T1, and the user moves the slide button 54 to the position indicating time T2, at which a beam of light 52a appears in the preview image in the second area. This glare may cause part of the composite image to be overexposed. After the user drags the slide button 54, the terminal device may display option 56, option 57, option 58, and option 59 in the first area. Option 56 indicates "cull this frame": if the user uses option 56 while the progress bar indicates time T2, the terminal may delete the single-frame image corresponding to time T2. Option 57 indicates "cull start" and option 58 indicates "cull end": assuming the user uses option 57 at time T2 indicated by the progress bar and option 58 at time T1, the terminal may delete all single-frame images captured between time T2 and time T1. Option 59 indicates "end exposure": when the user uses option 59 at time T2, the terminal may store the composite image displayed in the first display area.
It should be noted that, in the above example, only for convenience of understanding, in an actual application, other texts or controls representing the same functions may also be displayed, and the embodiment of the present invention is not limited to this specifically.
Based on this scheme, after the terminal receives the third input of the user, the terminal can display, in the first area, the composite image corresponding to the exposure time point selected by the third input, and display, in the second area, the single-frame image captured at that exposure time point, so that the user can judge the quality of the composite image from the images displayed in the two areas. The terminal can also display at least one option in the first area, so that after checking the effect of the composite image through the first area, and checking through the second area whether a single-frame image degrades that effect, the user can modify and edit the composite image through the options.
In a possible implementation manner, the long-exposure image capturing method provided in the embodiment of the present invention further includes, after step 201, step 211 and step 212, or step 211 and step 213:
in step 211, the terminal detects the exposure amount of the synthesized image displayed in the first region.
It should be noted that the "synthesized image" may include the image synthesized after each acquisition by the terminal, as well as the image synthesized after the final acquisition.
Optionally, the terminal may determine whether a partial region of the Nth composite image is overexposed by comparing the Nth composite image with the (N-1)th composite image, and may also determine, from the pixels or the image histogram of an image, whether the image contains an overexposed region or whether the entire image is overexposed.
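Both detection routes described above can be sketched briefly. This is a minimal illustration under assumed conventions (8-bit grayscale values, "near-saturated" meaning at or above level 250); the thresholds and function names are not from the patent.

```python
import numpy as np

def overexposed_fraction(image, level=250):
    """Histogram route: fraction of pixels at or above the saturation level."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    return hist[level:].sum() / image.size

def region_newly_overexposed(prev_composite, composite, level=250):
    """Comparison route: True where a pixel crossed the saturation level
    only in the Nth composite, not in the (N-1)th."""
    return (composite >= level) & (prev_composite < level)

bright = np.full((8, 8), 255, dtype=np.uint8)  # fully saturated composite
dark = np.full((8, 8), 40, dtype=np.uint8)     # earlier, unsaturated composite
```

A terminal might flag the image as abnormal once `overexposed_fraction` exceeds some small tolerance, rather than requiring every pixel to saturate.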
Step 212: in the case that the exposure amount is greater than or equal to the first exposure threshold, the terminal displays a target mark at a target position on the progress bar.
The target mark is used for indicating that the composite image is abnormal, and the target position corresponds, on the sliding area, to the exposure time at which the abnormality begins to appear in the image.
It should be noted that an exposure amount greater than or equal to the first exposure threshold may indicate that the combined image is overexposed.
Optionally, the occurrence of the anomaly in the synthesized image may include overexposure of the synthesized image or underexposure of the synthesized image, which is not specifically limited by the embodiment of the present invention.
Step 213: in the case that the exposure duration is longer than the fourth duration and the exposure amount is less than or equal to the second exposure threshold, the terminal displays the target mark at the target position on the progress bar.
It should be noted that the fourth time period in the embodiment of the present invention may indicate that the exposure is about to end. If, after the fourth time period, the detected exposure amount is less than or equal to the second exposure threshold, the exposure amount of the composite image is insufficient and the exposure time period needs to be extended; at this time, "underexposure" may be displayed at the target position on the progress bar to prompt the user to extend the exposure time period.
It is to be understood that if the user selects to extend the exposure time period, the progress bar may indicate the extended target exposure time period.
Optionally, the first exposure threshold and the second exposure threshold may be thresholds set by a user, or may also be thresholds configured in the terminal, which is not specifically limited in this embodiment of the present invention.
Optionally, the target identifier may be text, a border color, or an added mark, which is not specifically limited in this embodiment of the present invention.
Fig. 6 is an image display schematic diagram provided by an embodiment of the present invention. As shown in fig. 6, for convenience of description, only the first area is illustrated, that is, the area 61 is the first area. When the terminal determines that the exposure amount of the target image corresponding to time T1 is equal to the first threshold, the terminal adds a mark, for example the black circular region in the drawing, at the position on the progress bar corresponding to time T1, to indicate that the image becomes abnormal starting from time T1.
Based on the scheme, the terminal can detect the exposure amount of the synthesized image, and when the exposure amount is greater than or equal to the first exposure threshold value, or when the exposure duration is greater than the fourth duration and the exposure amount is less than or equal to the second exposure threshold value, the terminal displays a target mark at the target position on the sliding region, so as to prompt the user that the synthesized image is abnormal at the time point of the exposure, and facilitate the user to position the image for editing.
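The decision logic of steps 212 and 213 can be summarized in a few lines. The function name, the string return values, and the keyword parameters below are all illustrative; the patent only fixes the two threshold comparisons and the fourth-duration condition.

```python
def target_mark(exposure, elapsed, *, over_threshold, under_threshold, fourth_duration):
    """Decide whether a target mark belongs on the progress bar.

    Returns "overexposed" when the exposure amount reaches the first
    exposure threshold (step 212), "underexposed" when the exposure is
    still at or below the second threshold after the fourth duration
    (step 213), and None when no mark is needed.
    """
    if exposure >= over_threshold:
        return "overexposed"
    if elapsed > fourth_duration and exposure <= under_threshold:
        return "underexposed"
    return None
```

The mark is placed at the progress-bar position of the first frame for which this returns a non-None value, which is how the user later locates the frame to edit.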
In a possible implementation manner, the long-exposure image capturing method provided in the embodiment of the present invention further includes steps 214 and 215:
and step 214, the terminal receives a fourth input of the target option from the user.
Wherein the target option is one of the at least one option.
Step 215, in response to the fourth input, the terminal performs the function indicated by the target option.
Alternatively, fig. 7 is a schematic diagram of a display interface provided in an embodiment of the present invention; for convenience of explanation, only the contents displayed in the first area are shown, as in interface 71. Assume the user determines that the light trail in the first 2 frames of the captured images is not stable. After the terminal receives the third input of the user in the sliding area, the terminal may display (a) in fig. 7, in which the captured light is heart-shaped, but jitter in the first two frames causes the strokes of the synthesized heart to intersect, so the synthesis effect is poor. The user may therefore delete the first and second frames through a set of options, namely the "delete start" option 76, the "delete end" option 77, and the "save" option 78. After the images selected through options 76 and 77 (i.e., the multi-frame images captured between times T0 and T1) are deleted, the terminal may synthesize a new image from the remaining images; the result may be the image shown in (b) of fig. 7, where the composite displayed in interface 72 has a better effect than the composite in interface 71.
Based on the scheme, the user can operate the target option displayed by the terminal, so that the user can edit the synthesized image in the acquisition process or after the acquisition is finished, or delete the image for synthesis, and the quality of the image acquired by the terminal can be controlled.
In one possible implementation, the function is: deleting the image currently displayed in the second area, or deleting the images captured within the target time period. The long-exposure image shooting method provided by the embodiment of the present invention further includes, after step 215, step 216:
and step 216, after the terminal stops acquiring the images, the terminal performs fitting compensation on the synthesized images.
It should be noted that when the user deletes the image currently displayed in the second area, or deletes the images captured within the target time period, the motion trajectory or light of an object in the image may be interrupted, that is, the trajectory or light becomes discontinuous. For example, if the preview images captured within a certain time period are deleted in (a) of fig. 5, the car-light trail may be broken.
In the embodiment of the present invention, after the acquired images are deleted on the terminal and acquisition stops, the terminal can perform fitting compensation on the composite image to reconnect the broken light or trajectory, so that the resulting image has a better effect and the user experience is improved.
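One simple form of fitting compensation is to interpolate the trail coordinates at the deleted capture times from the surviving samples. The patent does not specify the fitting method; linear interpolation and the function below are an assumption (a spline or motion-model fit would be a natural refinement), and all names are hypothetical.

```python
import numpy as np

def compensate_trail(times, xs, ys, deleted_times):
    """Estimate trail coordinates at the deleted capture times by
    interpolating the surviving (time, x, y) samples, reconnecting the
    broken light trail before the final composite is rendered."""
    fx = np.interp(deleted_times, times, xs)
    fy = np.interp(deleted_times, times, ys)
    return list(zip(fx.tolist(), fy.tolist()))

# Trail samples at t = 0, 1, 3, 4; the frame at t = 2 was deleted:
patched = compensate_trail([0, 1, 3, 4], [0, 10, 30, 40], [0, 5, 15, 20], [2])
```

The estimated points can then be drawn into the composite so the car-light trail of fig. 5 shows no gap where frames were culled.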
Fig. 8 is a schematic diagram of a possible structure of a terminal according to an embodiment of the present invention, and as shown in fig. 8, a terminal 800 includes: a receiving module 801 and a display module 802; a receiving module 801, configured to receive a first input of a user, where the first input is used to trigger a terminal to acquire an image within a target exposure duration; a display module 802 for: in response to a first input received by the receiving module 801, displaying a first target image in a first area, where the first target image is an image synthesized from the N acquired first images; after the first time, updating the first target image displayed in the first area into a second target image; the second target image is an image obtained by combining the first target image and the acquired M second images, the M second images are images acquired within a first time length, the first time length is less than the target exposure time length, and N, M are positive integers.
Optionally, with reference to fig. 8, as shown in fig. 9, the terminal 800 further includes an acquisition module 803; the receiving module 801 is further configured to receive a second input from the user after receiving the first input from the user; the acquisition module 803 is configured to stop acquiring images in response to the second input received by the receiving module 801; the display module 802 is further configured to display a third target image in the first area; the third target image is an image synthesized from all the images acquired within a second time period, the second time period is the time from the moment of starting to acquire images to the moment of stopping acquiring images, the second time period is less than or equal to the target exposure time period, and the first time period is less than or equal to the second time period.
Optionally, the display module 802 is further configured to: after the receiving module 801 receives a first input of a user, in response to the first input, displaying a first preview image in a second area; after a third time length, updating the first preview image displayed in the second area into a second preview image; wherein the third duration is less than or equal to the first duration.
Optionally, the first area includes a progress bar showing an exposure time; the receiving module 801 is further configured to receive a third input of the user on the progress bar after receiving the first input of the user; a display module 802, further configured to update the image displayed in the first area to a fourth target image and update the image displayed in the second area to a third image in response to a third input received by the receiving module 801; the fourth target image is an image synthesized according to images acquired from the starting time to the first time, and the first time is an exposure time corresponding to the third input on the progress bar.
Optionally, the display module 802 is further configured to, after the receiving module 801 receives a third input of the user on the progress bar, display at least one option in the first area in response to the third input; wherein each option is respectively used for indicating the processing operation of the image displayed by one first area or second area, and the processing operation comprises at least one of the following operations: deleting the preview image currently displayed in the second area, deleting the preview image acquired within the target time length, deleting one preview image acquired by a preset time node, and storing the image currently displayed in the first area.
Optionally, with reference to fig. 8, as shown in fig. 10, the first area includes a progress bar displaying the exposure time, and the terminal 800 further includes a detection module 804; the detection module 804 is configured to detect the exposure amount of the composite image displayed in the first area after the receiving module 801 receives the first input of the user; the display module 802 is further configured to display a target identifier at a target position on the progress bar when the exposure amount detected by the detection module 804 is greater than or equal to the first exposure threshold, or when the exposure duration is greater than the fourth duration and the exposure amount is less than or equal to the second exposure threshold; the target identifier is used for indicating that the composite image is abnormal, and the target position corresponds, on the sliding area, to the exposure time at which the abnormality begins to appear in the image.
Optionally, with reference to fig. 8, as shown in fig. 11, the terminal 800 further includes a processing module 805; a receiving module 801, configured to receive a fourth input of the target option from the user, where the target option is one of the at least one option; a processing module 805, configured to execute the function indicated by the target option in response to the fourth input received by the receiving module 801.
Optionally, the first area and the second area are areas on the same display screen; alternatively, the first area and the second area are areas on different display screens.
The terminal 800 provided in the embodiment of the present invention can implement each process implemented by the terminal in the foregoing method embodiments, and is not described here again to avoid repetition.
According to the terminal provided by the embodiment of the invention, the terminal receives a first input of a user, and the first input is used for triggering the terminal to acquire an image within a target exposure duration; responding to the first input, the terminal displays a first target image in a first area, wherein the first target image is an image synthesized according to the acquired N first images; and after the first time, the terminal updates the first target image displayed in the first area into a second target image. The second target image is an image obtained by combining the first target image and the acquired M second images, and the M second images are images acquired by the terminal within the first time length, so that the terminal can update and display the combined images in the first area according to the increase of the exposure time, and further, a user can observe whether the shot images are over-exposed in the shooting process, and judge whether to terminate the shooting process according to the display effect of the displayed second target images, thereby avoiding the problem of unstable quality of the images shot by the terminal.
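The core update rule — the second target image is synthesized from the first target image plus the M newly acquired second images — can be folded incrementally, so the first area refreshes each first-duration tick without revisiting the N earlier frames. This sketch again assumes lighten (max) blending and NumPy, neither of which the patent mandates.

```python
import numpy as np

def update_composite(first_target, new_frames):
    """Fold the M newly acquired second images into the running composite
    (the first target image), yielding the second target image."""
    second_target = first_target
    for frame in new_frames:
        second_target = np.maximum(second_target, frame)  # lighten blend
    return second_target

# N = 3 first images establish the first target image:
n_frames = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(3)]
first_target = n_frames[0]
for f in n_frames[1:]:
    first_target = np.maximum(first_target, f)

# M = 2 second images arrive within the first duration:
m_frames = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(2)]
second_target = update_composite(first_target, m_frames)
```

Because max-blending is associative, compositing incrementally gives the same result as compositing all N + M frames at once, which is what makes the live first-area preview cheap.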
Fig. 12 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention, where the terminal 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal configuration shown in fig. 12 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The sensor 105 is configured to receive a first input of a user, where the first input is used to trigger the terminal to acquire an image within a target exposure duration; a display unit 106 configured to display a first target image in the first area in response to a first input, the first target image being an image synthesized from the N acquired first images; after the first time, updating the first target image displayed in the first area into a second target image; the second target image is an image synthesized by the first target image and the acquired M second images, the M second images are images acquired within a first time length, the first time length is smaller than the target exposure time length, and both N and M are positive integers.
According to the terminal provided by the embodiment of the invention, the terminal receives a first input of a user, and the first input is used for triggering the terminal to acquire an image within a target exposure duration; responding to the first input, the terminal displays a first target image in a first area, wherein the first target image is an image synthesized according to the acquired N first images; and after the first time, the terminal updates the first target image displayed in the first area into a second target image. The second target image is an image obtained by combining the first target image and the acquired M second images, and the M second images are images acquired by the terminal within the first time length, so that the terminal can update and display the combined images in the first area according to the increase of the exposure time, and further, a user can observe whether the shot images are over-exposed in the shooting process, and judge whether to terminate the shooting process according to the display effect of the displayed second target images, thereby avoiding the problem of unstable quality of the images shot by the terminal.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse web pages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output.
The terminal 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 12, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 100 or may be used to transmit data between the terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the terminal 100 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
Optionally, an embodiment of the present invention further provides a terminal, which, with reference to fig. 12, includes a processor 110, a memory 109, and a computer program that is stored in the memory 109 and is executable on the processor 110, and when the computer program is executed by the processor 110, the computer program implements each process of the long-exposure image shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the long-exposure image shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and includes instructions for enabling a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the methods of the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A long-exposure image shooting method, applied to a terminal, characterized by comprising:
receiving a first input of a user, wherein the first input is used for triggering the terminal to acquire images within a target exposure duration;
in response to the first input, displaying a first target image in a first area, wherein the first target image is an image synthesized from N acquired first images;
after a first duration, updating the first target image displayed in the first area to a second target image;
wherein the second target image is an image synthesized from the first target image and M acquired second images, the M second images are images acquired within the first duration, the first duration is less than the target exposure duration, and N and M are positive integers;
after receiving the first input of the user, the method further comprises:
displaying a first preview image in a second area in response to the first input;
after a third duration, updating the first preview image displayed in the second area to a second preview image;
wherein the third duration is less than or equal to the first duration.
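As an illustrative aside (not part of the claims), the incremental synthesis recited in claim 1 can be sketched as follows. The additive blend, frame counts, and function names here are hypothetical stand-ins; the patent does not specify how the terminal actually combines frames:

```python
import numpy as np

def synthesize(frames):
    # Hypothetical stand-in for the terminal's synthesis: an additive
    # blend of all frames, clipped to the 8-bit pixel range.
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f
    return np.clip(acc, 0, 255).astype(np.uint8)

def long_exposure_capture(capture_frame, total_frames, n, m):
    # Claim 1 flow: synthesize a first target image from N acquired
    # frames, then after each "first duration" fold the M newly
    # acquired frames into the displayed target image.
    frames = [capture_frame() for _ in range(n)]
    target = synthesize(frames)            # first target image (first area)
    acquired = n
    while acquired < total_frames:
        new_frames = [capture_frame() for _ in range(m)]
        acquired += m
        # Second target image: the previous target combined with the
        # M second images acquired within the first duration.
        target = synthesize([target] + new_frames)
    return target
```

Because the accumulated target is reused each round, only the M new frames need to be blended per update rather than the whole history.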
2. The method of claim 1, wherein after receiving the first input from the user, the method further comprises:
receiving a second input of the user;
in response to the second input, stopping acquiring images and displaying a third target image in the first area;
wherein the third target image is an image synthesized from all images acquired within a second duration, the second duration is the duration from the moment image acquisition starts to the moment it stops, the second duration is less than or equal to the target exposure duration, and the first duration is less than or equal to the second duration.
3. The method of claim 1, wherein the first area includes a progress bar on which an exposure time is displayed;
after receiving the first input of the user, the method further comprises:
receiving a third input of a user on the progress bar;
in response to the third input, updating the image displayed in the first area to a fourth target image and updating the image displayed in the second area to a third image;
wherein the fourth target image is an image synthesized from the images acquired between a starting time and a first time, and the first time is the exposure time on the progress bar corresponding to the third input.
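By way of illustration only, the progress-bar scrubbing of claim 3 amounts to re-synthesizing just the frames acquired between the starting time and the selected time. The timestamps, additive blend, and function name below are assumptions, not the patent's implementation:

```python
import numpy as np

def image_at_scrub_time(timestamps, frames, scrub_time):
    # Claim 3 flow: the "fourth target image" is synthesized from only
    # those frames acquired from the starting time up to the exposure
    # time the user selected on the progress bar.
    selected = [f.astype(np.float64)
                for ts, f in zip(timestamps, frames) if ts <= scrub_time]
    if not selected:
        return None  # scrub time precedes the first acquired frame
    return np.clip(sum(selected), 0, 255).astype(np.uint8)
```

Keeping the per-frame timestamps alongside the raw frames is what makes this rewind possible without restarting the exposure.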
4. The method of claim 3, wherein after receiving a third input by a user on the progress bar, the method further comprises:
displaying at least one option in the first area in response to the third input;
wherein each option indicates a processing operation on the image displayed in the first area or the second area, and the processing operation comprises at least one of the following: deleting the preview image currently displayed in the second area, deleting at least one preview image acquired within a target duration, deleting a preview image acquired at a preset time node, and saving the image currently displayed in the first area.
5. The method of claim 1, wherein the first area includes a progress bar on which an exposure time is displayed;
after receiving the first input of the user, the method further comprises:
detecting an exposure amount of the synthesized image displayed in the first region;
in a case where the exposure amount is greater than or equal to a first exposure threshold, or in a case where the exposure duration is greater than a fourth duration and the exposure amount is less than or equal to a second exposure threshold, displaying a target mark at a target position on the progress bar;
wherein the target mark indicates that the synthesized image is abnormal, and the target position indicates the exposure time corresponding to the abnormal image.
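A hedged sketch of the anomaly check in claim 5 follows; the threshold names and the per-sample exposure values are assumptions introduced for illustration, not the patent's implementation:

```python
def first_anomaly_time(samples, hi_threshold, lo_threshold, fourth_duration):
    # samples: (exposure_time, exposure_amount) pairs for the synthesized
    # image as it accumulates. An anomaly is flagged when the exposure
    # amount reaches the first (over-exposure) threshold, or when the
    # exposure duration has exceeded the fourth duration yet the amount
    # is still at or below the second (under-exposure) threshold.
    for exposure_time, amount in samples:
        if amount >= hi_threshold:
            return exposure_time   # target position for the mark
        if exposure_time > fourth_duration and amount <= lo_threshold:
            return exposure_time
    return None  # no abnormality; no target mark displayed
```

The returned exposure time would then locate the target mark on the progress bar, letting the user scrub back to just before the anomaly.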
6. The method of claim 1, wherein the first area and the second area are areas on the same display screen;
or the first area and the second area are areas on different display screens.
7. A terminal is characterized by comprising a receiving module and a display module;
the receiving module is used for receiving a first input of a user, wherein the first input is used for triggering the terminal to acquire an image within a target exposure duration;
the display module is configured to, in response to the first input received by the receiving module, display a first target image in a first area, wherein the first target image is an image synthesized from N acquired first images; and to update, after a first duration, the first target image displayed in the first area to a second target image; wherein the second target image is an image synthesized from the first target image and M acquired second images, the M second images are images acquired within the first duration, the first duration is less than the target exposure duration, and N and M are positive integers;
the display module is further configured to:
after the receiving module receives the first input of the user, display, in response to the first input, a first preview image in a second area; and
after a third duration, update the first preview image displayed in the second area to a second preview image;
wherein the third duration is less than or equal to the first duration.
8. The terminal of claim 7, further comprising an acquisition module;
the receiving module is further used for receiving a second input of the user after receiving the first input of the user;
the acquisition module is used for responding to the second input received by the receiving module and stopping acquiring the image;
the display module is further used for displaying a third target image in the first area;
wherein the third target image is an image synthesized from all images acquired within a second duration, the second duration is the duration from the moment image acquisition starts to the moment it stops, the second duration is less than or equal to the target exposure duration, and the first duration is less than or equal to the second duration.
9. The terminal of claim 7, wherein the first area includes a progress bar on which an exposure time is displayed;
the receiving module is further used for receiving a third input of the user on the progress bar after receiving the first input of the user;
the display module is further configured to update the image displayed in the first area to a fourth target image and update the image displayed in the second area to a third image in response to the third input received by the receiving module;
wherein the fourth target image is an image synthesized from the images acquired between a starting time and a first time, and the first time is the exposure time on the progress bar corresponding to the third input.
10. The terminal of claim 9, wherein the display module is further configured to:
after the receiving module receives the third input of the user on the progress bar, display, in response to the third input, at least one option in the first area;
wherein each option indicates a processing operation on the image displayed in the first area or the second area, and the processing operation comprises at least one of the following: deleting the preview image currently displayed in the second area, deleting at least one preview image acquired within a target duration, deleting a preview image acquired at a preset time node, and saving the image currently displayed in the first area.
11. The terminal of claim 7, wherein the first area includes a progress bar on which an exposure time is displayed; the terminal also comprises a detection module;
the detection module is used for detecting the exposure of the synthesized image displayed in the first area after the receiving module receives the first input of the user;
the display module is further configured to display a target mark at a target position on the progress bar in a case where the exposure amount detected by the detection module is greater than or equal to a first exposure threshold, or in a case where the exposure duration is greater than a fourth duration and the exposure amount is less than or equal to a second exposure threshold; wherein the target mark indicates that the synthesized image is abnormal, and the target position indicates the exposure time corresponding to the image in which the abnormality first appears.
12. The terminal of claim 7, wherein the first area and the second area are areas on the same display screen; or,
the first area and the second area are areas on different display screens.
13. A terminal, characterized in that the terminal comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the long-exposure image capturing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the long-exposure image capturing method according to any one of claims 1 to 6.
CN201811133549.2A 2018-09-27 2018-09-27 Long exposure image shooting method and terminal Active CN109120853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811133549.2A CN109120853B (en) 2018-09-27 2018-09-27 Long exposure image shooting method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811133549.2A CN109120853B (en) 2018-09-27 2018-09-27 Long exposure image shooting method and terminal

Publications (2)

Publication Number Publication Date
CN109120853A CN109120853A (en) 2019-01-01
CN109120853B true CN109120853B (en) 2020-09-08

Family

ID=64856871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811133549.2A Active CN109120853B (en) 2018-09-27 2018-09-27 Long exposure image shooting method and terminal

Country Status (1)

Country Link
CN (1) CN109120853B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111567033A (en) * 2019-05-15 2020-08-21 深圳市大疆创新科技有限公司 Shooting device, unmanned aerial vehicle, control terminal and shooting method
CN110111633B (en) * 2019-05-23 2022-01-07 中国人民解放军海军航空大学青岛校区 Shutter simulation control method
CN110971832A (en) * 2019-12-20 2020-04-07 维沃移动通信有限公司 Image shooting method and electronic equipment
CN111107267A (en) * 2019-12-30 2020-05-05 广州华多网络科技有限公司 Image processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103384310A (en) * 2013-07-09 2013-11-06 华晶科技股份有限公司 Image acquisition device and image acquisition method
CN103888683A (en) * 2014-03-24 2014-06-25 深圳市中兴移动通信有限公司 Mobile terminal and shooting method thereof
CN105187711A (en) * 2014-03-24 2015-12-23 努比亚技术有限公司 Mobile terminal and photographing method thereof
CN105657247A (en) * 2015-11-20 2016-06-08 乐视移动智能信息技术(北京)有限公司 Secondary exposure photographing method and apparatus for electronic device
CN107734269A (en) * 2017-10-16 2018-02-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4390967B2 (en) * 2000-04-21 2009-12-24 富士フイルム株式会社 Electronic camera

Also Published As

Publication number Publication date
CN109120853A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN108668083B (en) Photographing method and terminal
CN109639970B (en) Shooting method and terminal equipment
CN109361869B (en) Shooting method and terminal
CN110557566B (en) Video shooting method and electronic equipment
CN109495711B (en) Video call processing method, sending terminal, receiving terminal and electronic equipment
CN108471498B (en) Shooting preview method and terminal
CN108495029B (en) Photographing method and mobile terminal
CN109120853B (en) Long exposure image shooting method and terminal
CN109525874B (en) Screen capturing method and terminal equipment
CN110505400B (en) Preview image display adjustment method and terminal
CN111263071B (en) Shooting method and electronic equipment
CN110933306A (en) Method for sharing shooting parameters and electronic equipment
CN109102555B (en) Image editing method and terminal
CN111010511B (en) Panoramic body-separating image shooting method and electronic equipment
CN111147779B (en) Video production method, electronic device, and medium
CN108108079B (en) Icon display processing method and mobile terminal
CN111669503A (en) Photographing method and device, electronic equipment and medium
CN109413333B (en) Display control method and terminal
CN110798621A (en) Image processing method and electronic equipment
CN111597370A (en) Shooting method and electronic equipment
CN108132749B (en) Image editing method and mobile terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN111050069B (en) Shooting method and electronic equipment
CN111083374B (en) Filter adding method and electronic equipment
CN108924413B (en) Shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant