WO2022161383A1 - Shooting control method and apparatus, and electronic device - Google Patents

Shooting control method and apparatus, and electronic device

Info

Publication number
WO2022161383A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
track
target
control
light
Prior art date
Application number
PCT/CN2022/073935
Other languages
English (en)
French (fr)
Inventor
张弘
李志坚
Original Assignee
维沃移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Priority to JP2023545373A, published as JP2024504744A
Priority to EP22745257.0A, published as EP4287611A1
Publication of WO2022161383A1
Priority to US18/227,883, published as US20230412913A1

Classifications

    • H04N 23/62: Control of cameras or camera modules, control of parameters via user interfaces
    • G03B 30/00: Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
    • G06T 7/10: Image analysis, segmentation; edge detection
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup
    • H04N 5/2625: Special effects for obtaining an image composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N 5/265: Mixing
    • G06T 2207/10016: Image acquisition modality, video; image sequence

Definitions

  • The present application belongs to the field of communication technologies, and in particular relates to a shooting control method and device, and an electronic device.
  • The streamer shutter mode can capture the movement trajectories of light, shadow and flowing water under different lighting conditions and for different subjects.
  • The streamer shutter mode is generally used to shoot specific people and regular dynamic scenes, such as stars, waterfalls, streams and road vehicles; shooting in the streamer shutter mode can produce streak-like dynamic effects.
  • A streamer shutter video records the entire shooting process in the streamer shutter mode as a video.
  • The purpose of the embodiments of the present application is to provide a shooting control method, device and electronic device, which can solve the problem that the shooting form is relatively limited because the light track generation parameters are fixed during the process of shooting a streamer shutter video.
  • In a first aspect, an embodiment of the present application provides a shooting control method. The method includes: during the process of shooting a target video, displaying a first interface, where the target video is a video of the generation process of a light track, and the first interface includes a video preview window and at least one first control; receiving a first input from a user to a first target control, where the first target control is a control in the at least one first control; and, in response to the first input, adjusting the light track generation parameter corresponding to the first target control, and updating the image in the video preview window based on the adjusted light track generation parameter.
  • In a second aspect, an embodiment of the present application provides a shooting control device. The device includes a display module, a receiving module, and an update module. The display module is used for displaying a first interface in the process of shooting a target video, where the target video is a video of the generation process of a light track, and the first interface includes a video preview window and at least one first control; the receiving module is used for receiving a first input from a user to a first target control, where the first target control is a control in the at least one first control; and the update module is used for adjusting the light track generation parameter corresponding to the first target control in response to the first input, and updating the image in the video preview window based on the adjusted light track generation parameters.
  • In a third aspect, an embodiment of the present application provides an electronic device. The electronic device includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium, where a program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method according to the first aspect are implemented.
  • In a fifth aspect, an embodiment of the present application provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the method according to the first aspect.
  • In the embodiments of the present application, during the process of shooting a target video, a first interface can be displayed, where the target video is a video of the generation process of a light track, and the first interface includes a video preview window and at least one first control; a first input from a user to a first target control is received, where the first target control is a control in the at least one first control; and, in response to the first input, the light track generation parameter corresponding to the first target control is adjusted, and the image in the video preview window is updated based on the adjusted light track generation parameters.
  • In this way, in the process of shooting a video of the generation process of a light track, the user can adjust the generation parameters of the light track through a first input to the first target control (a control in the at least one first control, where each first control is used to adjust a generation parameter of the light track), and the image in the video preview window is updated accordingly.
  • That is, the user can change the generation parameters during the light track generation process through an input (that is, adjust the shooting form), so as to diversify the shooting forms and thus solve the problem in the related art of fixed light track generation parameters and a relatively limited shooting form.
  • FIG. 1 is a flowchart of a shooting control method provided by an embodiment of the present application.
  • FIG. 2 is the first of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 3 is the second of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 4 is the third of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 5 is the fourth of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 6 is the fifth of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 7 is the sixth of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 8 is the seventh of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 9 is the eighth of the schematic interface diagrams of the shooting control method provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a shooting control device provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • The terms "first", "second" and the like in the description and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. In addition, objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited.
  • the first object may be one or more than one.
  • "And/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • Words such as "exemplarily" or "for example" are used to represent examples, illustrations or descriptions. Any embodiment or design described in the embodiments of the present application as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present the related concepts in a specific manner.
  • "Plural" refers to two or more; for example, a plurality of processing units refers to two or more processing units, and a plurality of elements refers to two or more elements, etc.
  • the shooting control method provided by the embodiment of the present application can be specifically applied to the scene of shooting the video of the generation process of the light track (shooting the streamer shutter video).
  • In the embodiments of the present application, in the process of shooting the video of the generation process of the light track, the user can, through a first input to the first target control (a control in the at least one first control, where each first control is used to adjust a generation parameter of the light track), adjust the generation parameters of the light track, and the image in the video preview window is updated accordingly.
  • In this way, the user can change the generation parameters during the light track generation process through an input (that is, adjust the shooting form), so as to diversify the shooting forms and thus solve the problem in the related art of fixed light track generation parameters and a relatively limited shooting form.
  • an embodiment of the present application provides a photographing control method, and the following takes the execution subject as an electronic device as an example to illustrate the photographing control method provided by the embodiment of the present application.
  • the method may include steps 201 to 204 described below.
  • Step 201 the electronic device displays a first interface during the process of shooting the target video.
  • the target video is a video of the generation process of the light track
  • the first interface includes a video preview window and at least one first control, and each first control is used to adjust the generation parameters of the light track.
  • the video preview window is used to display a preview image (also referred to as a preview screen) captured in real time during the shooting of the target video.
  • each first control may be a control in the form of a progress bar, a control in the form of a button, or a control in other forms, which is not limited in the embodiment of the present application.
  • The generation parameters of the light tracks may include at least one of the following: the number of light tracks generated at the same time, and the generation speed of a light track in the generating state. Other generation parameters may also be included, which is not limited in this embodiment of the present application.
  • The at least one first control includes at least one of the following: a quantity control and a speed control, where the quantity control is used to adjust the number of light tracks generated at the same time, and the speed control is used to adjust the generation speed of a light track in the generating state.
  • the above quantity control may also be referred to as a “simultaneous generation quantity control”, and the above speed control may also be referred to as a “single generation speed control”, which is not limited in this embodiment of the present application.
  • When the generation parameters of the light tracks are not controlled, the number of light tracks generated at the same time is not limited, because it is determined by the actual shooting scene.
  • The generation speed of a single light track is controlled by controlling the frame rate of the target video, as sketched below.
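  • The following is a minimal sketch (an assumption for illustration, not part of the patent disclosure) of how the two light track generation parameters and the two controls described above could be modelled; the names TrackGenerationParams, set_quantity, set_speed and preview_frame_rate are hypothetical.

```python
# A minimal sketch, not taken from the patent, of the two generation parameters;
# all class and method names are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrackGenerationParams:
    # Number of light tracks allowed to grow at the same time;
    # None means "unlimited", i.e. determined by the actual scene.
    max_simultaneous_tracks: Optional[int] = None
    # Speed control value as a percentage; the preview frame rate is
    # scaled by this factor (100 keeps the native frame rate F).
    speed_percent: int = 100

    def set_quantity(self, value: Optional[int]) -> None:
        self.max_simultaneous_tracks = value

    def set_speed(self, percent: int) -> None:
        self.speed_percent = percent

    def preview_frame_rate(self, native_fps: float) -> float:
        # The generation speed of a single light track is realised by
        # changing the frame rate of the target video.
        return native_fps * self.speed_percent / 100.0


params = TrackGenerationParams()
params.set_quantity(3)   # quantity control: at most 3 light tracks grow at once
params.set_speed(50)     # speed control: preview runs at 50% of F
print(params.preview_frame_rate(30.0))  # -> 15.0
```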
  • Step 202 The electronic device receives a first input from the user on the first target control.
  • the first target control is a control in the at least one first control.
  • The first input may be the user's click input on the first target control, or may be the user's drag input on the first target control, or may be another feasible input, which may be determined according to actual use and is not limited in the embodiments of the present application.
  • The above click input may be a click input of any number of times, such as a single-click input or a double-click input, or may be a short-press input or a long-press input; the above drag input may be a drag input in any direction, such as an upward, downward, leftward, or rightward drag input.
  • Step 203 The electronic device adjusts the light track generation parameter corresponding to the first target control in response to the first input.
  • If the first target control is a quantity control, the electronic device adjusts the number of light tracks generated at the same time in response to the first input; if the first target control is a speed control, the electronic device adjusts the generation speed of the light track in response to the first input.
  • Step 204 The electronic device updates the image in the video preview window based on the adjusted optical track generation parameters.
  • For example, the user opens the camera application, taps the "Streamer shutter mode" option, and enters the streamer shutter video shooting mode. As shown in FIG. 2, the interface of the streamer shutter video shooting mode (i.e., the first interface) is displayed: the mark "1" indicates the video preview window, in which the currently captured preview image is displayed; the mark "2" indicates the quantity control, currently set to "unlimited"; and the mark "3" indicates the speed control, currently set to "50%".
  • the electronic device shoots a streamer shutter video (target video) according to the light track generation parameters specified by the user.
  • the electronic device will display the processed image in the video preview window after correspondingly processing the captured image.
  • In the embodiments of the present application, during the process of shooting a target video, a first interface can be displayed, where the target video is a video of the generation process of a light track, and the first interface includes a video preview window and at least one first control; a first input from a user to a first target control is received, where the first target control is a control in the at least one first control; and, in response to the first input, the light track generation parameter corresponding to the first target control is adjusted, and the image in the video preview window is updated based on the adjusted light track generation parameters.
  • In this way, in the process of shooting a video of the generation process of a light track, the user can adjust the generation parameters of the light track through a first input to the first target control (a control in the at least one first control, where each first control is used to adjust a generation parameter of the light track), and the image in the video preview window is updated accordingly.
  • That is, the user can change the generation parameters during the light track generation process through an input (that is, adjust the shooting form), so as to diversify the shooting forms and thus solve the problem in the related art of fixed light track generation parameters and a relatively limited shooting form.
  • the first target control is a quantity control in the at least one first control
  • the light track generation parameter is the number of light tracks generated at the same time
  • Step 204a Based on the first input, the electronic device updates the first image in the video preview window to the second image.
  • the electronic device updates the first image in the video preview window to the third image in the case of not receiving the first input.
  • Compared with the first image, the lengths of N light tracks in the second image increase (that is, the number of light tracks generated at the same time is N); compared with the first image, the lengths of M light tracks in the third image increase (that is, the number of light tracks generated at the same time is M). Both N and M are positive integers, and N and M are different.
  • For example, before the first input is received, the number of light tracks simultaneously generated in the video clips captured within a period of time is M (as in, for example, the third image).
  • The first input is an input performed by the user through the quantity control to trigger the electronic device to adjust the number of light tracks simultaneously generated in the subsequently shot video clip from M to N.
  • the electronic device acquires the second image in response to the first input, and updates the first image in the video preview window to the second image.
  • In addition to the N light tracks, the second image may also include at least one light track whose length has stopped increasing; likewise, in addition to the M light tracks, the third image may also include at least one light track whose length has stopped increasing. This is not limited in this embodiment of the present application.
  • The N light tracks may be completely different from the M light tracks, or may be partially the same and partially different; if N is less than M, the N light tracks may be among the M light tracks, and if N is greater than M, the N light tracks may include the M light tracks. The details may be determined according to actual conditions, which are not limited in the embodiments of the present application.
  • Optionally, the user can specify, through an input (for example, an input specifying which light tracks are to be generated, or an input specifying the light tracks of a certain area of the field of view, which is not limited in this embodiment of the present application), which light tracks in the second image the N light tracks are.
  • the user adjusts the number of light tracks simultaneously generated in the video clips within a period of time through the first input, so that the shooting forms can be diversified.
  • It should be noted that the light track generated image obtained by the actual shooting does not change; the electronic device processes it into an image that reflects the changed generation parameters (hereinafter referred to as a processed image), the processed image is displayed in the video preview window, and the target video is obtained based on the processed images, as sketched below.
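  • The per-frame flow implied above can be sketched as follows; the helper names record_target_video, apply_track_controls and show_in_preview are assumptions for illustration and do not come from the patent. The actually captured frame is left unchanged, a processed copy is shown in the video preview window, and the target video is assembled from the processed frames.

```python
# A hedged sketch of the per-frame flow described above; all names are illustrative.
from typing import Callable, Iterable, List


def record_target_video(raw_frames: Iterable[object],
                        apply_track_controls: Callable[[object], object],
                        show_in_preview: Callable[[object], None]) -> List[object]:
    processed: List[object] = []
    for raw in raw_frames:                  # each actually captured light track image
        shown = apply_track_controls(raw)   # apply the current generation parameters
        show_in_preview(shown)              # update the video preview window
        processed.append(shown)             # the target video is built from these
    return processed


# Toy usage with string "frames".
video = record_target_video(
    ["frame1", "frame2", "frame3"],
    apply_track_controls=lambda f: f + "-processed",
    show_in_preview=lambda f: None,
)
print(video)  # ['frame1-processed', 'frame2-processed', 'frame3-processed']
```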
  • the photographing control method provided by this embodiment of the present application may further include the following steps 10 to 13.
  • Step 10 The electronic device acquires an image 1 (the image 1 is an image generated by an actual captured light track).
  • Step 11 The electronic device performs image segmentation processing on image 1 to obtain S light track images 1, where each light track image 1 is an image of one light track (a light track in image 1) whose light track length has increased relative to the previous actually captured frame image (the light track image of the actual scene); S is a positive integer greater than or equal to P, and P is the greater of M and N.
  • Step 12 The electronic device performs image segmentation processing on the first image to obtain an intermediate image 1, where intermediate image 1 is the image obtained after dividing N light track images 2 out of the first image, and each light track image 2 is an image of one of the N light tracks in the first image.
  • Step 13 The electronic device performs image synthesis processing on the intermediate image 1 and the N track images 3 to obtain a second image.
  • Each light track image 3 in the N light track images 3 is the light track image with the shortest light track length among the light track image 1 corresponding to light track 1 in the S light track images 1 and the previously saved light track images corresponding to light track 1. If any of the N light track images 3 is a previously saved light track image, that saved light track image is deleted from the storage area after step 13. Here, light track 1 is one light track among the N light tracks, and different light track images 3 correspond to different light tracks among the N light tracks.
  • Alternatively, each light track image 3 in the N light track images 3 may be any light track image among the light track image 1 corresponding to light track 1 in the S light track images 1 and the light track image set previously saved for light track 1. Light track 1 is the light track, among the N light tracks, corresponding to that light track image 3; that is to say, light track 1 is one light track among the N light tracks, and different light track images 3 correspond to different light tracks among the N light tracks.
  • For each light track whose generation parameters are controlled, there is a corresponding light track image set.
  • For a light track whose generation parameters are not controlled, there may be a light track image set (which is empty), or there may be no light track image set.
  • A light track image set may be created at the start of shooting the target video, or may be created only when the generation parameters of the corresponding light track are controlled; this is not limited in this embodiment of the present application.
  • The light track image set may or may not be implemented as a set (it may simply be a separate storage area), which is not limited in this embodiment of the present application.
  • the photographing control method provided by this embodiment of the present application may further include the following step 14.
  • Step 14 The electronic device saves, among the S light track images 1, the light track images 1 other than those that are the same as the light track images in the N light track images 3. (A minimal sketch of the bookkeeping in steps 10 to 14 is given below.)
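  • The sketch below abstracts the actual image segmentation and synthesis of steps 10 to 14 behind two placeholder callables (segment_tracks and compose) and keeps, per light track, a queue of segmented light track images that were generated but not yet shown; all names are assumptions for illustration, not the patent's implementation.

```python
# A hedged sketch of the bookkeeping in steps 10-14; segment_tracks and compose
# are placeholders supplied by the caller.
from collections import defaultdict, deque

track_image_sets = defaultdict(deque)   # per-track queue of saved, not-yet-shown images


def update_preview_frame(previous_preview, captured_frame,
                         selected_tracks, segment_tracks, compose):
    """Build the next preview image so that only `selected_tracks` keep growing."""
    # Steps 10-11: segment the actually captured frame into per-track images of
    # every light track whose length increased since the previous frame.
    grown = segment_tracks(captured_frame)          # {track_id: light track image}

    # Step 12: divide the selected tracks out of the currently displayed first image.
    background = compose(previous_preview, remove=selected_tracks)

    # Step 13: for each selected track, prefer the earliest (shortest) saved image;
    # a saved image that is used is removed from its set.
    shown = {}
    for tid in selected_tracks:
        if track_image_sets[tid]:
            shown[tid] = track_image_sets[tid].popleft()
            if tid in grown:                         # the fresh image is cached instead
                track_image_sets[tid].append(grown[tid])
        else:
            shown[tid] = grown.get(tid)

    # Step 14: save the segmented images of tracks that are not being shown.
    for tid, img in grown.items():
        if tid not in selected_tracks:
            track_image_sets[tid].append(img)

    return compose(background, add=shown)
```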
  • For example, suppose the shooting scene of the target video actually contains 4 light tracks, namely light track a, light track b, light track c and light track d; that is to say, when the shooting control method provided by the embodiments of the present application is not applied, the target video is a video of the generation process of the 4 light tracks.
  • Suppose an input 2 from the user is received to increase the number of simultaneously generated light tracks from 2 to 3 (the quantity control is adjusted from 2 to 3). Then, in the next video clip 3, 2 light tracks (light track a and light track b) continue to be generated at the same time, and 1 light track (light track c) changes from stopped to being generated again: specifically, the shortest light track image is taken frame by frame from light track image set c (and then deleted from light track image set c) to replace the light track image corresponding to light track c in the actually captured frame-by-frame light track generated images (the replaced light track image is saved to light track image set c); it can also be said that the shortest light track image x obtained frame by frame from light track image set c is swapped with the light track image y corresponding to light track c in the actually captured frame-by-frame light track generated image. Meanwhile, one light track (light track d) remains stopped: in the actually captured frame-by-frame light track generated images of this process, the light track image corresponding to light track d is segmented out and saved.
  • By looping through the above steps 10 to 14, the electronic device can obtain the video segment (in the target video) after the first input.
  • Optionally, an image with changed generation parameters (i.e., a processed image, the second image) may be obtained by performing the processing corresponding to the first input on the image (the third image) that would be obtained if the first input were not received; the processed image is displayed in the video preview window, and the target video is obtained based on the processed images.
  • the shooting control method provided by the embodiment of the present application may further include the following steps 204b to 204c.
  • Step 204b the electronic device acquires a third image.
  • Step 204c the electronic device performs target processing on the third image to obtain the second image.
  • the target processing is the processing corresponding to the first input.
  • The target processing includes image segmentation processing and image synthesis processing, and may also include other image processing methods, as determined by actual usage requirements; this is not limited in the embodiments of the present application.
  • If the electronic device has not received any other input to any first control, the third image is an image generated based on the actual light tracks of the shooting scene; if the electronic device has received other inputs to any first control, the third image may be the actually generated light track image, or may be an image obtained after performing on it the image processing corresponding to those other inputs.
  • After the first input, if no other input (an input to any first control, or an input to stop shooting the target video) is received, the electronic device loops through the above steps 204b to 204c to obtain the video segment (of the target video) after the first input.
  • In this way, the second image is obtained, so that the light track generated image required by the user can be obtained, and in turn the light track generation video required by the user can be obtained.
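  • The choice of target processing can be sketched as a simple dispatch over the old and new sets of simultaneously generated light tracks; reduce_tracks and restore_tracks stand for the two procedures detailed next (steps 101 to 103 and steps 105 to 106), and, like the function name itself, are illustrative assumptions rather than the patent's implementation.

```python
# A hedged sketch of the "target processing" dispatch; all names are illustrative.
def target_processing(third_image, old_tracks, new_tracks,
                      reduce_tracks, restore_tracks):
    old, new = set(old_tracks), set(new_tracks)
    if new <= old:
        # Fewer tracks keep growing: segment out and freeze the M-N dropped tracks.
        return reduce_tracks(third_image, dropped=old - new)
    if old <= new:
        # Previously stopped tracks resume: merge N-M cached light track images back in.
        return restore_tracks(third_image, resumed=new - old)
    # Mixed case: some tracks are dropped and others resume.
    return restore_tracks(reduce_tracks(third_image, dropped=old - new),
                          resumed=new - old)
```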
  • Optionally, in a case where the M light tracks include the N light tracks (that is, some of the light tracks that were originally being generated at the same time are controlled to stop generating), the above step 204c may be specifically implemented through the following steps 101 to 103.
  • Step 101 The electronic device performs image segmentation processing on the third image to obtain a first intermediate image.
  • The first intermediate image is the image obtained after dividing M-N first light track images out of the third image, where each first light track image is an image of one light track among the M light tracks other than the N light tracks.
  • Step 102 The electronic device performs image segmentation processing on the first image to obtain M-N second track images.
  • each second light track image is: an image of one light track in the M light tracks except the N light tracks.
  • Step 103 The electronic device performs image synthesis processing on the first intermediate image and the M-N second track images to obtain a second image.
  • the above-mentioned step 101 may be specifically implemented by the following step 101a, and after the step 101a, the shooting control method provided by this embodiment of the present application may further include the following step 104.
  • Step 101a the electronic device performs image segmentation processing on the third image to obtain a first intermediate image and M-N first track images.
  • Step 104 the electronic device saves the M-N first track images.
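  • A concrete toy sketch of steps 101 to 104 follows, under the assumption (for illustration only) that a frame is a small 2D numpy array and each light track is a boolean mask; freeze_tracks and saved_track_images are hypothetical names.

```python
# A toy sketch of steps 101-104: the dropped tracks are cut out of the new frame,
# saved for a possible later resume, and their frozen appearance from the first
# image is pasted back in. The array/mask representation is an assumption.
import numpy as np

saved_track_images = {}   # track_id -> list of saved light track images


def freeze_tracks(third_image, first_image, masks_to_freeze):
    """Return the second image in which the dropped tracks stop growing."""
    second_image = third_image.copy()
    for track_id, mask in masks_to_freeze.items():
        # Step 101/101a: segment the grown track out of the newly generated frame...
        grown_pixels = np.where(mask, third_image, 0)
        # ...Step 104: and save it so the track can resume later.
        saved_track_images.setdefault(track_id, []).append(grown_pixels)
        # Steps 102-103: paste the not-yet-grown track from the first image back in.
        second_image = np.where(mask, first_image, second_image)
    return second_image


# Toy usage: a 1x6 "frame" where track "a" occupies columns 3-5.
first = np.array([[1, 1, 0, 2, 0, 0]])    # track "a" has length 1 here
third = np.array([[1, 1, 0, 2, 2, 2]])    # track "a" has grown to length 3
mask_a = np.array([[False, False, False, True, True, True]])
print(freeze_tracks(third, first, {"a": mask_a}))   # -> [[1 1 0 2 0 0]]
```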
  • For example, when a user input adjusting the quantity control to reduce the number of light tracks generated at the same time is received, the electronic device, in response to the input, extracts the images of the light tracks to be reduced (the light tracks among the M light tracks, other than the N light tracks, that are actually being generated) using image segmentation technology (as shown in FIG. 3, image segmentation technology can separate the background and foreground of an image, or separate light track 1 and light track 2 within the foreground, for example dividing out and saving light track 2), and then saves them frame by frame for subsequent processing, so that, in the preview, the reduced light tracks that are actually still being generated stop generating.
  • In other words, when the user reduces the number of light tracks generated at the same time, if the number of light tracks actually being generated is greater than the number of simultaneously generated light tracks set by the user, the image displayed in the video preview window only includes the number of simultaneously generated light tracks set by the user.
  • The images of the extra light tracks can first be segmented out and saved; later, when the user sets some or all of these extra light tracks to continue generating, the electronic device can take the previously saved light track images corresponding to those light tracks out of the storage area, synthesize them into the preview image, and display them in the video preview window to reflect the generation process of those light tracks.
  • Optionally, in a case where the N light tracks include the M light tracks (that is, while the original M light tracks continue to be generated, some light tracks that were previously controlled to stop generating resume generating), the above step 204c may be specifically implemented through the following steps 105 to 106.
  • Step 105 the electronic device acquires N-M third track images.
  • Each third light track image is the light track image with the shortest light track length in the light track image set, saved before the first input, corresponding to a first target light track; a first target light track is one light track among the N-M light tracks, of the N light tracks, other than the M light tracks.
  • Step 106 The electronic device performs image synthesis processing on the third image and the N-M third track images to obtain the second image.
  • It should be noted that the third image may or may not include light tracks other than the M light tracks. This can be understood as three cases: the third image includes no light tracks other than the M light tracks (the N-M light tracks are not in the third image); the third image includes, in addition to the M light tracks, the N-M light tracks (the third image includes the N-M light tracks); or the third image includes, in addition to the M light tracks, a part of the N-M light tracks (the third image includes part of the N-M light tracks).
  • For a light track that is not in the third image, the synthesis processing in step 106 may insert the corresponding third light track image into the corresponding position in the third image; for a light track that is in the third image, the synthesis processing in step 106 may replace the light track image at the corresponding position in the third image with the corresponding third light track image.
  • the above-mentioned step 106 can also be specifically implemented by the following steps 106a to 106b.
  • Step 106a the electronic device performs image segmentation processing on the third image to obtain a second intermediate image.
  • The second intermediate image is the image obtained after dividing N-M fifth light track images out of the third image, where each fifth light track image is an image of one light track among the N light tracks other than the M light tracks.
  • a fifth track image corresponds to a third track image
  • different fifth track images correspond to different third track images.
  • The light track length in each light track image in the light track image set is greater than the light track length in the corresponding fifth light track image, because a fifth light track image is an image of a light track whose length has not increased in the third image, whereas the light track images saved in the light track image set are all images of light tracks whose actual length has increased but which are not displayed in the video preview window.
  • Step 106b the electronic device performs image synthesis processing on the second intermediate image and the N-M third track images to obtain a second image.
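  • Continuing the toy representation above (frames as numpy arrays, light tracks as boolean masks), steps 105 to 106b can be sketched as follows; resume_tracks is a hypothetical name, and the FIFO "oldest saved image first" rule stands in for "the light track image with the shortest light track length".

```python
# A toy sketch of steps 105-106b: a previously frozen track resumes growing by
# pasting its oldest (shortest) saved light track image over the frozen region.
import numpy as np


def resume_tracks(third_image, masks_to_resume, saved_track_images):
    second_image = third_image.copy()
    for track_id, mask in masks_to_resume.items():
        if not saved_track_images.get(track_id):
            continue                                  # nothing cached for this track
        # Step 105: take the earliest saved image (the shortest one) and delete it.
        cached = saved_track_images[track_id].pop(0)
        # Steps 106a-106b: replace the frozen track region with the cached image.
        second_image = np.where(mask, cached, second_image)
    return second_image


# E.g. after the freeze in the previous sketch, track "a" picks up where it left off:
# resume_tracks(np.array([[1, 1, 0, 2, 0, 0]]), {"a": mask_a}, saved_track_images)
# -> [[1 1 0 2 2 2]]
```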
  • Steps 105 to 106 correspond to the case where, after the user has reduced the number of simultaneously generated light tracks and a certain number of light track images have been cached, the number of simultaneously generated light tracks is increased again (part or all of the light tracks that were originally controlled to stop generating are regenerated).
  • In this case, the images of the part or all of the light tracks generated in real time can be segmented out and saved, while the corresponding previously saved light track images are extracted and synthesized into the preview image displayed in the video preview window; that is, a large number of previously cached light track images will be displayed in the video preview window.
  • the first interface further includes at least one second control, and each second control is used to determine the distribution pattern of the N-M third track images.
  • Before step 105, the shooting control method provided by the embodiment of the present application may further include the following steps 107 to 108, and the above step 105 may be specifically implemented through the following step 105a.
  • Step 107 The electronic device receives a second input from the user on the second target control.
  • the second target control is a control in the at least one second control.
  • the at least one second control includes at least one of the following: a single-point diffusion distribution control, a multi-point diffusion distribution control, a uniform distribution control, and a random distribution control.
  • For example, the at least one second control includes a single-point diffusion distribution control, a multi-point diffusion distribution control, a uniform distribution control, and a random distribution control, corresponding respectively to the "Single-point diffusion", "Multi-point diffusion", "Uniform" and "Random" options indicated by the mark "4" in the figure.
  • The single-point diffusion distribution control is used to control the light tracks to be resumed to be distributed around one center point (that is, light tracks close to the center point are preferentially selected to resume generation, and then light tracks farther from the center point are selected to resume generation).
  • The multi-point diffusion distribution control is used to control the light tracks to be resumed to be distributed around multiple center points respectively (that is, light tracks close to any of the center points are preferentially selected to resume generation, and then light tracks far from the center points are selected to resume generation).
  • The uniform distribution control is used to control the resumed light tracks to be evenly distributed in the image, without concentrating around any one or more points.
  • The random distribution control is used to control the resumed light tracks to be randomly distributed in the image, with no restriction on the distribution form.
  • Step 108 The electronic device determines, in response to the second input, the distribution pattern of the N-M third track images as the target pattern.
  • Step 105a in response to the second input, the electronic device acquires N-M fourth track images whose distribution pattern is the target pattern from the K fourth track images, as the N-M third track images.
  • Each fourth light track image is the light track image with the shortest light track length in the light track image set, saved before the first input, corresponding to a second target light track; a second target light track is a light track other than the M light tracks, and K is a positive integer greater than N-M.
  • For example, if the user selects the single-point diffusion distribution control, the electronic device controls the resumed light tracks to spread out around the user-specified center point. If the user does not select any second control, the electronic device may resume the light tracks in their original order (the order of the appearance times of the light tracks).
  • In this way, by adding at least one second control, the user can select the corresponding second control according to his or her own needs, so that the resumed light tracks are distributed as the user requires, thereby further diversifying the shooting forms.
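  • A hedged sketch of how the second controls could select which frozen light tracks resume is given below. The (track_id, position) representation, the center points and the helper name pick_tracks_to_resume are assumptions for illustration only, not the patent's selection algorithm.

```python
# A sketch of choosing N-M tracks to resume according to the distribution mode.
import math
import random


def pick_tracks_to_resume(candidates, count, mode, centers=None):
    """Return `count` track ids from (track_id, (x, y)) candidates."""
    if count <= 0:
        return []
    if mode in ("single_point", "multi_point"):
        # Prefer tracks closest to the chosen center point(s).
        def dist(item):
            _, (x, y) = item
            return min(math.hypot(x - cx, y - cy) for cx, cy in centers)
        ordered = sorted(candidates, key=dist)
    elif mode == "uniform":
        # Spread the picks evenly across the candidate list.
        step = max(1, len(candidates) // count)
        ordered = candidates[::step]
    elif mode == "random":
        ordered = random.sample(candidates, k=len(candidates))
    else:
        # Default: original order (the order in which the tracks appeared).
        ordered = list(candidates)
    return [track_id for track_id, _ in ordered[:count]]


tracks = [("a", (10, 10)), ("b", (200, 40)), ("c", (12, 15)), ("d", (90, 90))]
print(pick_tracks_to_resume(tracks, 2, "single_point", centers=[(0, 0)]))  # ['a', 'c']
```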
  • the first target control is a speed control in the at least one first control
  • the light track generation parameter is the generation speed of the light track
  • The speed control is used to adjust the generation speed of a light track in the generating state; the above step 204 may be specifically implemented through the following step 204d.
  • Step 204d the electronic device updates the image in the video preview window at the first frame rate within the first time period.
  • the start time of the first time period is the time when the first input is received
  • the end time of the first time period is the time when the target input is received
  • The target input is any one of the following: an input to a control in the at least one first control, or an input to stop shooting the target video.
  • the first frame rate is determined based on the first input, and if the first input is not received, the image in the video preview window is updated at the second frame rate; the second frame rate is different from the first frame rate.
  • the user adjusts the frame rate of the video through the first input of the speed control, so that the generation speed of the light track can be changed.
  • When the frame rate is lowered, the generation speed of the light track decreases; when the frame rate is raised, the generation speed of the light track increases. In this way, the shooting forms can be further diversified.
  • the shooting control method provided by this embodiment of the present application may further include the following step 205.
  • Step 205 the electronic device saves the first target image.
  • the first target image is an image that has not been updated in the video preview window within the first time period.
  • the electronic device can first save the first target image for use when the frame rate is increased later.
  • reducing the frame rate means slowing down the generation speed of the optical track.
  • the electronic device may save the obtained preview image, and then display it in the video preview window at the speed of F*(X%) frames per second.
  • X% is the value of the speed control set by the user
  • F frames per second is the frame rate before the first input.
  • Optionally, the shooting control method provided by this embodiment of the present application may further include the following step 206, and the above step 204d may be specifically implemented through the following step 204d1.
  • Step 206 The electronic device acquires a second target image, where the second target image is an image saved before the first input and not updated in the video preview window.
  • Step 204d1 the electronic device updates the image in the video preview window at the first frame rate based on the second target image and the third target image within the first time period.
  • the third target image is: an image generated within the first time period.
  • In this way, the generation of the light track can be sped up by increasing the frame rate, and the generation speed of the light track can even be made higher than the actual generation speed (higher than F frames per second).
  • Specifically, the electronic device saves the currently generated images frame by frame, and replays the previously buffered images frame by frame in the video preview window at a speed of F*(Y%) frames per second.
  • Y% is the value of the speed control set by the user.
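  • The frame pacing described in the speed-control steps above can be sketched as follows. F, X% and Y% follow the description; the buffer-based scheduling and the name preview_schedule are assumptions about one possible realisation, not the patent's implementation.

```python
# A hedged sketch of speed control via the frame rate: frames are generated at F per
# second, the preview consumes them at F * speed_percent / 100 per second, buffering
# the surplus when the speed is below 100% and draining the buffer when it is above.
from collections import deque


def preview_schedule(generated_frames, native_fps, speed_percent, seconds):
    buffer = deque(generated_frames)          # frames captured but not yet shown
    shown = []
    display_fps = native_fps * speed_percent / 100.0
    for _ in range(int(display_fps * seconds)):
        if buffer:
            shown.append(buffer.popleft())    # replay buffered frames in order
    return shown, list(buffer)                # leftover frames stay buffered


frames = [f"f{i}" for i in range(60)]         # 2 s of capture at F = 30 fps
slow_shown, backlog = preview_schedule(frames, native_fps=30, speed_percent=50, seconds=2)
print(len(slow_shown), len(backlog))          # 30 shown, 30 buffered (X% = 50)
fast_shown, _ = preview_schedule(backlog, native_fps=30, speed_percent=200, seconds=1)
print(len(fast_shown))                        # 30: the backlog replayed at 2x F (Y% = 200)
```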
  • the user can also click the "stop shooting” button to end the shooting and generate the target video.
  • the user can manually stop the video recording, and the electronic device saves the recorded video locally for the user to view.
  • In this way, in the process of shooting the video of the light track generation process, in the streamer shutter video mode of the electronic device's camera, the user can adjust the generation speed of the light track through an input to the speed control, so as to slow down or speed up the generation of the light track.
  • the execution subject may be a shooting control device, or a functional module and/or functional entity in the shooting control device for executing the shooting control method.
  • In the embodiments of the present application, the shooting control device is described by taking, as an example, a shooting control device executing the shooting control method.
  • FIG. 10 shows a possible schematic structural diagram of the photographing control apparatus involved in the embodiment of the present application.
  • the shooting control device 300 may include: a display module 301, a receiving module 302, an adjustment module 303 and an update module 304;
  • the display module 301 is used to display a first interface during the process of shooting a target video,
  • the target video is the generation process video of the light track, and the first interface includes a video preview window and at least one first control;
  • the receiving module 302 is used for receiving the first input from the user to the first target control, and the first target control is the A control in at least one first control;
  • the adjustment module 303 is used to adjust the light track generation parameter corresponding to the first target control in response to the first input received by the receiving module;
  • The update module 304 is used to update the image in the video preview window based on the adjusted light track generation parameters.
  • the first target control is a quantity control in the at least one first control
  • the light track generation parameter is the number of light tracks generated at the same time
  • The update module 304 is specifically used to update the first image in the video preview window to the second image; in the case where the first input is not received, the update module 304 is used to update the first image in the video preview window to a third image. Compared with the first image, the lengths of N light tracks in the second image are increased; compared with the first image, the lengths of M light tracks in the third image are increased; both N and M are positive integers, and N and M are different.
  • the shooting control device 300 further includes: an acquisition module 305 and an image processing module 306; the acquisition module 305 is configured to, before the update module 304 updates the first image in the video preview window to the second image, Acquire a third image; the image processing module 306 is configured to perform target processing on the third image to obtain a second image.
  • the M optical tracks include the N optical tracks
  • The image processing module 306 is specifically configured to: perform image segmentation processing on the third image to obtain a first intermediate image, where the first intermediate image is the image obtained after dividing M-N first light track images out of the third image, and each first light track image is an image of one light track among the M light tracks other than the N light tracks; perform image segmentation processing on the first image to obtain M-N second light track images, where each second light track image is an image of one light track among the M light tracks other than the N light tracks; and perform image synthesis processing on the first intermediate image and the M-N second light track images to obtain the second image.
  • the shooting control device 300 further includes: a saving module 307; the image processing module 306, which is specifically configured to perform image segmentation processing on the third image to obtain a first intermediate image and M-N first track images; the saving Module 307, configured to save the M-N first track images.
  • the N light tracks include the M light tracks
  • The image processing module 306 is configured to acquire N-M third light track images, and to perform image synthesis processing on the third image and the N-M third light track images to obtain the second image; each third light track image is the light track image with the shortest light track length in the light track image set, saved before the first input, corresponding to a first target light track, and a first target light track is one light track among the N-M light tracks, of the N light tracks, other than the M light tracks.
  • The shooting control device 300 further includes a determining module 308, and the first interface further includes at least one second control. The receiving module 302 is further configured to receive, before the N-M third light track images are acquired, a second input from the user to a second target control, where the second target control is a control in the at least one second control; the determining module 308 is configured to determine, in response to the second input received by the receiving module, that the distribution pattern of the N-M third light track images is the target pattern; and the acquisition module 305 is specifically configured to acquire, from K fourth light track images, N-M fourth light track images whose distribution pattern is the target pattern, as the N-M third light track images. Each fourth light track image is the light track image with the shortest light track length in the light track image set, saved before the first input, corresponding to a second target light track; a second target light track is a light track other than the M light tracks, and K is a positive integer greater than N-M.
  • the at least one second control includes at least one of the following: a single-point diffusion distribution control, a multi-point diffusion distribution control, a uniform distribution control, and a random distribution control.
  • the first target control is a speed control in the at least one first control
  • the light track generation parameter is the generation speed of the light track
  • The update module 304 is specifically configured to update the image in the video preview window at the first frame rate within a first time period, where the starting moment of the first time period is the moment when the first input is received, and the end moment of the first time period is the moment when the target input is received; the target input is any one of the following: an input to a control in the at least one first control, or an input to stop shooting the target video. In the case where the first input is not received, the image in the video preview window is updated at the second frame rate; the second frame rate is different from the first frame rate.
  • The shooting control device 300 further includes a saving module 307; the saving module 307 is configured to, in the case where the first frame rate is less than the second frame rate, save a first target image after the image in the video preview window is updated at the first frame rate within the first time period, where the first target image is an image that has not been updated to the video preview window within the first time period.
  • The shooting control device 300 further includes an obtaining module 305; the obtaining module 305 is configured to, in the case where the first frame rate is greater than the second frame rate, obtain a second target image before the image in the video preview window is updated at the first frame rate within the first time period, where the second target image is an image saved before the first input and not updated to the video preview window. The update module 304 is specifically configured to update the image in the video preview window at the first frame rate within the first time period based on the second target image and a third target image, where the third target image is an image generated within the first time period.
  • In the shooting control device 300, the modules that must be included are indicated by solid line boxes, such as the display module 301, the receiving module 302, the adjustment module 303 and the update module 304; modules that may or may not be included are indicated by dashed boxes, such as the acquiring module 305, the image processing module 306, the saving module 307 and the determining module 308.
  • An embodiment of the present application provides a shooting control device, which can display a first interface during the process of shooting a target video, where the target video is a video of the generation process of a light track, and the first interface includes a video preview window and at least one first control; receive a first input from a user to a first target control, where the first target control is a control in the at least one first control; and, in response to the first input, adjust the light track generation parameter corresponding to the first target control, and update the image in the video preview window based on the adjusted light track generation parameters.
  • In this way, in the process of shooting a video of the generation process of a light track, the user can adjust the generation parameters of the light track through a first input to the first target control (a control in the at least one first control, where each first control is used to adjust a generation parameter of the light track), and the image in the video preview window is updated accordingly.
  • That is, the user can change the generation parameters during the light track generation process through an input (that is, adjust the shooting form), so as to diversify the shooting forms and thus solve the problem in the related art of fixed light track generation parameters and a relatively limited shooting form.
  • the photographing control apparatus in this embodiment of the present application may be an apparatus, or may be an electronic device or a component, an integrated circuit, or a chip in the electronic device.
  • the electronic device may be a mobile electronic device or a non-mobile electronic device.
  • The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like.
  • The non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the photographing control device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android (Android) operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
  • the photographing control device provided in the embodiment of the present application can implement the various processes implemented by the method embodiments in FIGS. 1 to 9 , and can achieve the same technical effect. To avoid repetition, details are not repeated here.
  • An embodiment of the present application further provides an electronic device 400, including a processor 401, a memory 402, and a program or instruction stored in the memory 402 and executable on the processor 401. When the program or instruction is executed by the processor 401, each process of the above embodiments of the shooting control method can be implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 12 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 500 includes, but is not limited to, a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and other components.
  • The electronic device 500 may also include a power supply (such as a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 510 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management through the power management system.
  • the structure of the electronic device shown in FIG. 12 does not constitute a limitation on the electronic device.
  • The electronic device may include more or fewer components than shown, combine some components, or have a different arrangement of components, which will not be repeated here.
  • the display unit 506 is used to display a first interface in the process of shooting the target video, the target video is the generation process video of the light track, and the first interface includes a video preview window and at least one first control;
  • the user input unit 507 for receiving a first input from a user to a first target control, where the first target control is a control in at least one first control;
  • The processor 510 is used to adjust the light track generation parameter corresponding to the first target control in response to the first input, and update the image in the video preview window based on the adjusted light track generation parameters.
  • optionally, the first target control is a quantity control in the at least one first control, and the light track generation parameter is the number of light tracks generated at the same time; before the first input, a first image is displayed in the video preview window, and the processor 510 is specifically configured to update the first image in the video preview window to a second image; when the first input is not received, the first image is updated to a third image instead; compared with the first image, N light tracks increase in length in the second image, while M light tracks increase in length in the third image, where N and M are positive integers and N is different from M.
  • the processor 510 is further configured to acquire a third image before updating the first image in the video preview window to the second image, and perform target processing on the third image to obtain the second image.
  • optionally, in a case where N is less than M, the M light tracks include the N light tracks;
  • the processor 510 is specifically configured to: perform image segmentation processing on the third image to obtain a first intermediate image, where the first intermediate image is an image obtained by segmenting M-N first light track images out of the third image, and each first light track image is an image of one light track among the M light tracks other than the N light tracks; perform image segmentation processing on the first image to obtain M-N second light track images, where each second light track image is an image of one light track among the M light tracks other than the N light tracks; and perform image synthesis processing on the first intermediate image and the M-N second light track images to obtain the second image (see the segmentation sketch following this list).
  • the processor 510 is specifically configured to perform image segmentation processing on the third image to obtain the first intermediate image and the M-N first light track images, and is further configured to save the M-N first light track images.
  • optionally, in a case where N is greater than M, the N light tracks include the M light tracks;
  • the processor 510 is specifically configured to acquire N-M third light track images, and to perform image synthesis processing on the third image and the N-M third light track images to obtain the second image; each third light track image is the light track image with the shortest light track length in a light track image set that was saved before the first input and that corresponds to one first target light track;
  • one first target light track is one light track among the N-M light tracks, of the N light tracks, other than the M light tracks.
  • the first interface further includes at least one second control
  • the user input unit 507 is further configured to receive, before the N-M third light track images are acquired, a second input from the user on a second target control, where the second target control is a control in the at least one second control;
  • the processor 510 is further configured to determine, in response to the second input, that the distribution mode of the N-M third light track images is a target mode;
  • the processor 510 is specifically configured to obtain, from K fourth light track images, N-M fourth light track images whose distribution mode is the target mode as the N-M third light track images; each fourth light track image is the light track image with the shortest light track length in a light track image set that was saved before the first input and that corresponds to one second target light track, one second target light track is a light track other than the M light tracks, and K is a positive integer greater than N-M.
  • the at least one second control includes at least one of the following: a single-point diffusion distribution control, a multi-point diffusion distribution control, a uniform distribution control, and a random distribution control (see the track-restore sketch following this list).
  • optionally, the first target control is a speed control in the at least one first control, and the light track generation parameter is the generation speed of the light tracks;
  • the processor 510 is specifically configured to update the image in the video preview window at a first frame rate within a first time period; the start of the first time period is the moment the first input is received, and the end of the first time period is the moment a target input is received, where the target input is either an input on a control in the at least one first control or an input to stop shooting the target video; when the first input is not received, the image in the video preview window is updated at a second frame rate, and the second frame rate is different from the first frame rate (see the frame-rate sketch following this list).
  • the processor 510 is further configured to, in a case where the first frame rate is less than the second frame rate, save a first target image after updating the image in the video preview window at the first frame rate within the first time period, where the first target image is an image that was not updated into the video preview window within the first time period.
  • the processor 510 is further configured to, in a case where the first frame rate is greater than the second frame rate, acquire a second target image before updating the image in the video preview window at the first frame rate within the first time period, where the second target image is an image saved before the first input that was not updated into the video preview window; and is further specifically configured to update, within the first time period, the image in the video preview window at the first frame rate based on the second target image and a third target image, where the third target image is an image generated within the first time period.
  • the electronic device can display a first interface during the process of shooting a target video, where the target video is a video of the generation process of light tracks and the first interface includes a video preview window and at least one first control; receive a first input from the user on a first target control, where the first target control is a control in the at least one first control; and, in response to the first input, adjust the light track generation parameter corresponding to the first target control and update the image in the video preview window based on the adjusted light track generation parameter.
  • in this solution, in the process of shooting a video of the generation process of light tracks, the user can adjust the generation parameters of the light tracks through a first input on the first target control (a control in the at least one first control, where each first control is used to adjust a generation parameter of the light tracks), and the image in the video preview window is updated accordingly.
  • in this way, during shooting, the user can change the generation parameters in the light track generation process through input (that is, adjust the shooting form), so that the shooting form is diversified, thereby solving the problem in the related art that the light track generation parameters are fixed and the shooting form is relatively single.
  • the radio frequency unit 501 can be used to receive and send signals during the sending and receiving of information or during a call; specifically, downlink data received from the base station is passed to the processor 510 for processing, and uplink data is sent to the base station.
  • the radio frequency unit 501 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides the user with wireless broadband Internet access through the network module 502, such as helping the user to send and receive emails, browse web pages, access streaming media, and the like.
  • the audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into audio signals and output as sound.
  • the audio output unit 503 may also provide audio output related to a specific function performed by the electronic device 500 (eg, call signal reception sound, message reception sound, etc.).
  • the input unit 504 may include a graphics processing unit (Graphics Processing Unit, GPU) 5041 and a microphone 5042, and the graphics processor 5041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 507 includes a touch panel 5071 and other input devices 5072 .
  • the touch panel 5071 is also called a touch screen.
  • the touch panel 5071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 5072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which are not described herein again.
  • Memory 509 may be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • the processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication; it can be understood that the modem processor may alternatively not be integrated into the processor 510.
  • the embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium; when the program or instruction is executed by a processor, each process of the above embodiment of the shooting control method is implemented and the same technical effect is achieved, which is not repeated here.
  • the processor is the processor in the electronic device described in the foregoing embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the above-mentioned embodiments of the shooting control method.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, or the like.
  • the method of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.
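
The device-side bullets above describe the display unit, the user input unit, and the processor cooperating so that an input on a first control changes a light track generation parameter and the preview is refreshed from the adjusted parameter. As a rough, non-authoritative illustration of that control-to-parameter dispatch only, the Python sketch below models the two controls named in this disclosure (quantity and speed); the class and method names are invented for the example and the real device logic is not disclosed at this level of detail.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TrackGenerationParams:
    """Light track generation parameters that the first controls adjust."""
    max_simultaneous_tracks: Optional[int] = None  # None means "unlimited"
    speed_percent: int = 100                       # preview rate as a percentage of the capture rate


@dataclass
class ShootingController:
    params: TrackGenerationParams = field(default_factory=TrackGenerationParams)

    def on_quantity_control(self, value: Optional[int]) -> None:
        # First input on the quantity control: change how many tracks grow at once.
        self.params.max_simultaneous_tracks = value

    def on_speed_control(self, percent: int) -> None:
        # First input on the speed control: change the generation (preview) speed.
        self.params.speed_percent = percent

    def preview_frame_rate(self, capture_fps: float) -> float:
        # The preview window is refreshed at F * (X%) frames per second.
        return capture_fps * self.params.speed_percent / 100.0


controller = ShootingController()
controller.on_quantity_control(2)   # e.g. the user drags the quantity control from "unlimited" to 2
controller.on_speed_control(50)     # e.g. the user sets the speed control to 50%
print(controller.preview_frame_rate(30.0))  # prints 15.0
```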
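
When the quantity control lowers the number of simultaneously generated tracks (N smaller than M), the bullets above describe segmenting the M-N surplus track images out of the newly captured frame, saving them per track, and compositing the preview so those tracks appear to stop growing. The segmentation sketch below approximates that with NumPy boolean masks; the per-track masks, the cache layout, and the helper names are assumptions made for illustration, not the segmentation algorithm of the disclosure.

```python
from collections import defaultdict, deque

import numpy as np

# One cache per frozen track: frames are appended in capture order, so the
# shortest cached track image for a given track is always at the left end.
track_cache = defaultdict(deque)


def freeze_tracks(captured, last_preview, track_masks, frozen_ids):
    """Build the next preview frame when M-N tracks must stop growing.

    captured:     frame actually captured this tick (all tracks keep growing in reality)
    last_preview: frame currently shown in the video preview window
    track_masks:  boolean HxW mask per track id locating that track (assumed given)
    frozen_ids:   ids of the tracks removed from simultaneous generation
    """
    preview = captured.copy()
    for tid in frozen_ids:
        mask = track_masks[tid]
        # 1) Segment the grown track out of the captured frame and cache it for later reuse.
        track_cache[tid].append(np.where(mask[..., None], captured, 0))
        # 2) Keep showing the older, shorter track from the last preview instead.
        preview[mask] = last_preview[mask]
    return preview
```

A restore step that consumes this cache is shown in the track-restore sketch below.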
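
When the quantity control later raises the count again (N greater than M), the bullets describe taking, for each restored track, the shortest cached track image and compositing it into the preview, with a second control choosing which frozen tracks resume first (single-point diffusion, multi-point diffusion, uniform, or random distribution). The track-restore sketch below illustrates only the single-point-diffusion selection and the composite step, reusing a cache shaped like track_cache in the segmentation sketch; the user-chosen centre point and the per-track centroids are assumed inputs.

```python
import numpy as np


def pick_tracks_single_point(frozen_ids, centroids, center, count):
    """Single-point diffusion: resume the frozen tracks closest to a chosen centre first."""
    center = np.asarray(center, dtype=float)
    return sorted(
        frozen_ids,
        key=lambda tid: float(np.hypot(*(np.asarray(centroids[tid]) - center))),
    )[:count]


def resume_tracks(preview, selected_ids, track_cache):
    """Composite, for each selected track, its shortest cached track image into the preview."""
    out = preview.copy()
    for tid in selected_ids:
        if track_cache[tid]:
            cached = track_cache[tid].popleft()  # the shortest track image was cached first
            mask = cached.any(axis=-1)           # non-zero pixels belong to the cached track
            out[mask] = cached[mask]
    return out
```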
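
The speed-control bullets describe updating the preview at a first frame rate F*(X%) within the first time period, saving the frames that are not shown when the preview is slowed down, and later replaying those saved frames at F*(Y%) when generation is sped up. The frame-rate sketch below models only the bookkeeping of which captured or buffered frame to display at each tick; real timing, rendering, and the capture pipeline are outside its scope, and the class structure is an assumption.

```python
from collections import deque


class PreviewScheduler:
    """Decide which frames to show when the preview runs at F * (percent / 100) fps."""

    def __init__(self):
        self.percent = 100      # current value of the speed control
        self.backlog = deque()  # captured frames not yet shown (the saved "first target" images)
        self._credit = 0.0      # fractional display ticks accumulated so far

    def set_speed(self, percent):
        self.percent = percent

    def on_captured_frame(self, frame):
        """Called once per captured frame; returns the frames to display during this interval."""
        self.backlog.append(frame)
        self._credit += self.percent / 100.0        # display ticks earned per capture tick
        to_show = []
        while self._credit >= 1.0 and self.backlog:
            to_show.append(self.backlog.popleft())  # oldest buffered frame goes out first
            self._credit -= 1.0
        return to_show


scheduler = PreviewScheduler()
scheduler.set_speed(50)                       # slow generation down: show every other frame
shown = [scheduler.on_captured_frame(i) for i in range(4)]
print(shown)                                  # prints [[], [0], [], [1]]; frames 2 and 3 stay buffered
scheduler.set_speed(200)                      # speed up: drain the backlog at twice the capture rate
print(scheduler.on_captured_frame(4))         # prints [2, 3]
```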

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a shooting control method and apparatus, and an electronic device. The method includes: displaying a first interface in the process of shooting a target video, where the target video is a video of the generation process of light tracks, and the first interface includes a video preview window and at least one first control; receiving a first input from a user on a first target control, where the first target control is a control in the at least one first control; and, in response to the first input, adjusting a light track generation parameter corresponding to the first target control, and updating the image in the video preview window based on the adjusted light track generation parameter.

Description

Shooting control method and apparatus, and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202110121815.5, filed in China on January 28, 2021, the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the field of communication technology, and specifically relates to a shooting control method, a shooting control apparatus, and an electronic device.
Background
With the rapid development of communication technology, the shooting functions of electronic devices have become more and more powerful, for example, the light-painting shutter mode shooting function.
The light-painting shutter mode can capture the moving trajectories of light and shadow and of flowing water according to different lighting conditions and subjects. It is generally used to shoot specific subjects and regular dynamic scenes, such as the starry sky, waterfalls, streams, and road traffic, and shooting in this mode can produce streak-like dynamic effects. A light-painting shutter video records the entire shooting process of the light-painting shutter mode as a video.
However, because the light track generation parameters are fixed in the process of shooting a light-painting shutter video, the shooting form is relatively single and cannot meet users' needs.
Summary
The purpose of the embodiments of this application is to provide a shooting control method, a shooting control apparatus, and an electronic device, which can solve the problem that the shooting form is relatively single because the light track generation parameters are fixed in the process of shooting a light-painting shutter video.
To solve the above technical problem, this application is implemented as follows:
In a first aspect, an embodiment of this application provides a shooting control method, the method including: displaying a first interface in the process of shooting a target video, where the target video is a video of the generation process of light tracks, and the first interface includes a video preview window and at least one first control; receiving a first input from a user on a first target control, where the first target control is a control in the at least one first control; and, in response to the first input, adjusting a light track generation parameter corresponding to the first target control, and updating the image in the video preview window based on the adjusted light track generation parameter.
In a second aspect, an embodiment of this application provides a shooting control apparatus, the apparatus including a display module, a receiving module, and an updating module; the display module is configured to display a first interface in the process of shooting a target video, where the target video is a video of the generation process of light tracks, and the first interface includes a video preview window and at least one first control; the receiving module is configured to receive a first input from a user on a first target control, where the first target control is a control in the at least one first control; and the updating module is configured to adjust, in response to the first input, the light track generation parameter corresponding to the first target control, and to update the image in the video preview window based on the adjusted light track generation parameter.
In a third aspect, an embodiment of this application provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium, where a program or instruction is stored on the readable storage medium, and the program or instruction, when executed by a processor, implements the steps of the method described in the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, the chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method described in the first aspect.
In the embodiments of this application, a first interface may be displayed in the process of shooting a target video, where the target video is a video of the generation process of light tracks, and the first interface includes a video preview window and at least one first control; a first input from a user on a first target control is received, where the first target control is a control in the at least one first control; and, in response to the first input, a light track generation parameter corresponding to the first target control is adjusted, and the image in the video preview window is updated based on the adjusted light track generation parameter. In this solution, in the process of shooting a video of the generation process of light tracks, the user can adjust the generation parameters of the light tracks and update the image in the video preview window through a first input on the first target control (a control in the at least one first control, where each first control is used to adjust a generation parameter of the light tracks). In this way, during shooting, the user can change the generation parameters in the light track generation process through input (that is, adjust the shooting form), so that the shooting form is diversified, thereby solving the problem in the related art that the light track generation parameters are fixed and the shooting form is relatively single.
Brief description of the drawings
FIG. 1 is a flowchart of a shooting control method provided by an embodiment of this application;
FIG. 2 is a first schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 3 is a second schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 4 is a third schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 5 is a fourth schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 6 is a fifth schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 7 is a sixth schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 8 is a seventh schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 9 is an eighth schematic diagram of an interface of the shooting control method provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of a shooting control apparatus provided by an embodiment of this application;
FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
FIG. 12 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of this application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”等所区分的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或”的关系。
在本申请实施例中,“示例性地”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性地”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性地”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或者两个以上,例如,多个处理单元是指两个或者两个以上的处理单元;多个元件是指两个或者两个以上的元件等。
下面结合附图,通过具体的实施例及其应用场景对本申请实施例提供的拍摄控制方 法、装置和电子设备进行详细地说明。
本申请实施例提供的拍摄控制方法具体可以应用于拍摄光轨的生成过程的视频(拍摄流光快门视频)的场景,该方案中,在拍摄光轨的生成过程视频的过程中,用户可以通过对第一目标控件(至少一个第一控件中的控件,每个第一控件用于调节光轨的生成参数)的第一输入,调节光轨的生成参数,更新视频预览窗中的图像,如此,在拍摄过程中,用户可以通过输入改变光轨生成过程中的生成参数(即调节拍摄形式),使拍摄形式多样化,从而可以解决相关技术光轨生成参数固定,拍摄形式比较单一的问题。
参考图1所示,本申请实施例提供了一种拍摄控制方法,下面以执行主体为电子设备为例,对本申请实施例提供的拍摄控制方法进行示例性地说明。该方法可以包括下述的步骤201至步骤204。
步骤201、电子设备在拍摄目标视频的过程中,显示第一界面。
其中,该目标视频为光轨的生成过程视频,第一界面包括视频预览窗和至少一个第一控件,每个第一控件用于调节光轨的生成参数。
可以理解,本申请实施例中,视频预览窗用于显示目标视频拍摄过程中实时拍摄到的预览图像(也可以称为预览画面)。
可以理解,本申请实施例中,每个第一控件可以是进度条形式的控件,也可以是按键形式的控件,还可以是其他形式的控件,本申请实施例不做限定。
可以理解,本申请实施例中,光轨的生成参数可以包括以下至少一项:同时生成的光轨数量,处于生成状态的光轨的生成速度。还可以包括其他的生成参数,本申请实施例不做限定。
可选地,该至少一个控件包括以下至少一项:数量控件,速度控件;其中,该数量控件用于调节同时生成的光轨数量,该速度控件用于调节处于生成状态的光轨的生成速度。
可以理解,上述数量控件也可以称为“同时生成数量控件”,上述速度控件也可以称为“单条生成速度控件”,本申请实施例不做限定。
可以理解,在拍摄目标视频的过程中,若不控制光轨的生成参数,则同时生成的光轨数量是不限定的,因为同时生成的光轨数量是由实际拍摄场景确定的。本申请实施例中,是通过控制目标视频的帧率来控制单条光轨的生成速度的。
步骤202、电子设备接收用户对第一目标控件的第一输入。
其中,第一目标控件为该至少一个第一控件中的控件。
可选地,第一输入可以为用户对第一目标控件的点击输入,也可以为用户对第一目标控件的拖动输入,还可以是其他的可行性输入,具体可以根据实际使用情况确定,本申请实施例不做限定。
示例性地,上述点击输入可以是单击输入、双击输入等任意次数的点击输入,也可以是短按输入或长按输入等;上述拖动输入可以是向任意方向的拖动输入,例如向上的拖动输入、向下的拖动输入、向左的拖动输入或向右的拖动输入等。
步骤203、电子设备响应于第一输入,调节与第一目标控件对应的光轨生成参数。
示例性地,若第一目标控件为数量控件,则电子设备响应于第一输入,调节同时生成的光轨数量。若第一目标控件为速度控件,则电子设备响应于第一输入,调节光轨的生成速度。
步骤204、电子设备基于调节后的光轨生成参数,更新该视频预览窗中的图像。
示例性地,用户打开相机应用程序,点击“流光快门模式”选项,进入流光快门视频拍摄模式,如图2所示,显示流光快门视频拍摄模式的界面(即第一界面),标记“1”指示视频预览窗,视频预览窗中显示当前拍摄得到的预览图像;标记“2”指示数量控件,当前数量控件是“无限制”;标记“3”指示速度控件,当前速度控件是“50%”。用户点 击“拍摄”选项后,电子设备根据用户指定的光轨生成参数,拍摄流光快门视频(目标视频)。如果数量控件和速度控件分别是无限制和100%(即用户未设置),则不做特殊处理,直接把拍摄得到的图像显示在视频预览窗中。如果用户通过数量控件和速度控件中的至少一个设置光轨的生成参数,则电子设备通过对拍摄得到的图像进行相应处理后,再将处理后的图像显示在视频预览窗中。
在本申请实施例中,可以通过在拍摄目标视频的过程中,显示第一界面,该目标视频为光轨的生成过程视频,第一界面包括视频预览窗和至少一个第一控件;接收用户对第一目标控件的第一输入,第一目标控件为该至少一个第一控件中的控件;响应于第一输入,调节与第一目标控件对应的光轨生成参数,并基于调节后的光轨生成参数,更新该视频预览窗中的图像。该方案中,在拍摄光轨的生成过程视频的过程中,用户可以通过对第一目标控件(至少一个第一控件中的控件,每个第一控件用于调节光轨的生成参数)的第一输入,调节光轨的生成参数,更新视频预览窗中的图像,如此,在拍摄过程中,用户可以通过输入改变光轨生成过程中的生成参数(即调节拍摄形式),使拍摄形式多样化,从而可以解决相关技术光轨生成参数固定,拍摄形式比较单一的问题。
可选地,第一目标控件为该至少一个第一控件中的数量控件,该光轨生成参数为同时生成的光轨数量;在第一输入之前,该视频预览窗中显示第一图像;上述步骤204具体可以通过下述步骤204a实现。
步骤204a、电子设备基于第一输入,将该视频预览窗中的第一图像更新为第二图像。
其中,电子设备在未接收到第一输入的情况下,将该视频预览窗中的第一图像更新为第三图像。与第一图像相比,第二图像中有N条光轨长度增加(即说明同时生成的光轨数量为N);与第一图像相比,第三图像中有M条光轨长度增加(即说明同时生成的光轨数量为M);N、M均为正整数,且N、M不同。
可以理解,本申请实施例中,在第一输入之前,一段时间内拍摄到的视频片段中同时生成的光轨数量为M,则若未接收到第一输入的情况下,拍摄到的视频片段中同时生成的光轨数量为M(例如第三图像)。第一输入是用户通过对数量控件的输入,触发电子设备将后续拍摄的视频片段中同时生成的光轨数量由M调节为N的输入。电子设备响应于第一输入,获取第二图像,并将视频预览窗中的第一图像更新为第二图像。
可以理解,本申请实施例中,第二图像中除了该N条光轨,还可以包括至少一条长度停止增加的光轨,第三图像中除了该M条光轨,还可以包括至少一条长度停止增加的光轨,本申请实施例不做限定。
可选地,本申请实施例中,该N条光轨可以与该M条光轨完全不同;也可以部分相同,部分不同;若N小于M,该N条光轨可以为该M条光轨中的光轨;若N大于M,该N条光轨可以包括该M条光轨;具体可以根据实际情况确定,本申请实施例不做限定。
本申请实施例中,在第一输入之后,用户可以通过输入(例如,指定光轨具体是那些光轨的输入、或者指定是拍摄视场中哪个区域的光轨的输入等,本申请实施例不做限定)指定该N条光轨是第二图像中的哪些光轨。
本申请实施例中,用户通过第一输入调节一段时间内的视频片段中同时生成的光轨数量,从而可以使拍摄形式多样化。
可选地,本申请实施例中,在拍摄目标视频的过程中,由于拍摄场景未改变,实际拍摄得到的光轨生成图像没有改变,本申请可以通过对实际拍摄得到的图像进行与第一输入对应的处理后,得到生成参数发生变化的图像(以下称为处理后的图像),并在视频预览窗中显示处理后的图像,并基于处理后的图像,得到目标视频。
示例性地,在上述步骤204a之前,本申请实施例提供的拍摄控制方法还可以包括下述的步骤10至步骤13。
步骤10、电子设备获取图像1(图像1为实际拍摄得到的光轨生成图像)。
步骤11、电子设备对图像1进行图像分割处理,得到S个光轨图像1,每个光轨图像1为:相对于实际拍摄的前一帧图像(实际场景中的光轨图像)中的对应光轨,光轨长度增长的光轨(图像1中的光轨)的图像,S为大于或等于P的正整数,P为M和N中的较大值。
步骤12、电子设备对第一图像进行图像分割处理,得到中间图像1,中间图像1为:从第一图像中分割出N个光轨图像2之后的图像,每个光轨图像2为:第一图像中与N条光轨中的一条光轨对应的光轨的图像。
步骤13、电子设备对中间图像1和N个光轨图像3进行图像合成处理,得到第二图像。
可选地,N个光轨图像3中的每个光轨图像3为:该S个光轨图像1中与光轨1对应的光轨图像1和之前保存的与光轨1对应的光轨图像集中,光轨长度最短的光轨图像(若N个光轨图像3中有之前保存的光轨图像,则在步骤13之后,从存储区域删除该保存的光轨图像),其中,光轨1为N条光轨中的一条光轨,不同的光轨图像3对应N条光轨中的不同光轨。
可选地,N个光轨图像3中的每个光轨图像3为:该S个光轨图像1中与光轨1对应的光轨图像1和之前保存的与光轨1对应的光轨图像集中,任意一个光轨图像。
其中,光轨1为N条光轨中与该每个光轨图像3对应的光轨。也就是说,光轨1为N条光轨中的一条光轨,不同的光轨图像3对应N条光轨中的不同光轨。
需要说明的是:本申请实施例中,针对每条被控制生成参数(停止生成)的光轨,有一个对应的光轨图像集。针对没有被控制生成参数的光轨,可以有光轨图像集(光轨图像集为空),也可以没有光轨图像集。一个光轨图像集可以是拍摄目标视频的初始生成的,也可以是在对应的光轨被控制生成参数的情况下才生成的,本申请实施例不做限定。
光轨图像集可以是一个集合,也可以不是一个集合(仅是一个单独的存储区域),本申请实施例不做限定。
可选地,在步骤13之后,本申请实施例提供的拍摄控制方法还可以包括下述的步骤14。
步骤14、电子设备保存该S个光轨图像1中除与N个光轨图像3中的光轨图像相同的光轨图像1之外的光轨图像1。
示例性地,假设目标视频针对的拍摄场景中实际包括4条光轨,分别为光轨a、光轨b、光轨c和光轨d,也就是说,在不实施本申请实施例提供的拍摄控制方法的情况下,目标视频中为4条光轨的生成过程视频。在目标视频的初始的视频片段1中,4条光轨(光轨a、光轨b、光轨c和光轨d)同时生成,然后接收到用户将同时生成的光轨数量由4条减少为2条的输入1(数量控件由4调为2),在接下来的视频片段2中,2条光轨同时生成(光轨a和光轨b),2条光轨从接收到输入1开始停止生成(光轨c和光轨d),将该过程中实际拍摄到的逐帧光轨生成图像中,光轨c对应的光轨图像分割出来,并逐帧保存在光轨图像集c中,光轨d对应的光轨图像分割出来,并逐帧保存在光轨图像集d中。一段时间后,接收到用户将同时生成的光轨数量由2条增加到3条的输入2(数量控件由2调为3),在接下来的视频片段3中,2条光轨继续同时生成(光轨a和光轨b),1条光轨(光轨c)由停止生成变为继续生成,具体地,逐帧从光轨图像集c中获取最短的光轨图像(之后从光轨图像集c删除该光轨图像)替换掉实际拍摄到的逐帧光轨生成图像中光轨c对应的光轨图像(将该替换下来的光轨图像保存至光轨图像集c)(也可以说,将从光轨图像集c中获取的逐帧最短光轨图像x与实际拍摄到的逐帧光轨生成图像中光轨c对应的光轨图像y对调),1条光轨继续停止生成(光轨d),将该过程中实际拍摄到的逐帧光轨生成图像中,光轨d对应的光轨图像分割出来,并逐帧保存在光轨图像集d中。又一段 时间后,接收到用户将同时生成的光轨数量由3条减少到1条的输入3(数量控件由3调为1),在接下来的视频片段4中,1条光轨继续生成(光轨a),2条光轨(光轨b和光轨c)由生成变为停止生成,1条光轨继续停止生成(光轨d),将该过程中实际拍摄到的逐帧光轨生成图像中,光轨b对应的光轨图像分割出来,并逐帧保存在光轨图像集b中,光轨c对应的光轨图像分割出来,并逐帧保存在光轨图像集c中,光轨d对应的光轨图像分割出来,并逐帧保存在光轨图像集d中。
需要说明的是,本申请实施例中,在第一输入之后,未接收到其他输入(对任意第一控件的输入、停止目标视频拍摄的输入)的情况下,电子设备通过循环执行上述步骤10至步骤14,可以得到(目标视频中)第一输入之后的视频片段。
可选地,本申请实施例中,可以通过对在未接收到第一输入的情况下,得到的图像(第三图像)进行与第一输入对应的处理后,得到生成参数发生变化的图像(处理后的图像,第二图像),并在视频预览窗中显示处理后的图像,并基于处理后的图像,得到目标视频。
示例性地,在上述步骤204a之前,本申请实施例提供的拍摄控制方法还可以包括下述的步骤204b至步骤204c。
步骤204b、电子设备获取第三图像。
步骤204c、电子设备对第三图像进行目标处理,得到第二图像。
其中,该目标处理为与第一输入对应的处理。
本申请实施例中,目标处理包括图像分割处理、图像合成处理,还可以包括其他图像处理方法,具体可以根据实际使用需求确定,本申请实施例不做限定。
可以理解,若在第一输入之前,电子设备未接收到对任意第一控件的其他输入,则第三图像是电子设备拍摄得到基于拍摄场景的实际光轨生成图像;若在第一输入之前,电子设备曾接收到对任意第一控件的其他输入,则第三图像可以是上述实际光轨生成图像,也可以是对上述实际光轨生成图像处理(与上述其他输入对应的处理)后的图像。
需要说明的是,本申请实施例中,在第一输入之后,未接收到其他输入(对任意第一控件的输入、停止目标视频拍摄的输入)的情况下,电子设备通过循环执行上述步骤204b至步骤204c,得到(目标视频中)第一输入之后的视频片段。
本申请实施例中,通过获取第三图像,并对第三图像进行目标处理,得到第二图像,从而可以得到用户需求的光轨生成图像,进而得到用户需求的光轨生成视频。
可选地,在N小于M的情况下,该M条光轨包括该N条光轨(即控制原来同时生成的光轨中的一部分光轨停止生成),上述步骤204c具体地可以通过下述步骤101至步骤103实现。
步骤101、电子设备对第三图像进行图像分割处理,得到第一中间图像。
其中,第一中间图像为:从第三图像中分割出M-N个第一光轨图像之后的图像,每个第一光轨图像为:该M条光轨中除该N条光轨之外的一条光轨的图像。
步骤102、电子设备对第一图像进行图像分割处理,得到M-N个第二光轨图像。
其中,每个第二光轨图像为:该M条光轨中除该N条光轨之外的一条光轨的图像。
步骤103、电子设备对第一中间图像和该M-N个第二光轨图像进行图像合成处理,得到第二图像。
需要说明的是,本申请实施例中,上述图像分割处理、图像合成处理等可以参考相关图像分割技术、图像合成技术等(下同),此处不予赘述。
可选地,上述步骤101具体地可以通过下述步骤101a实现,在步骤101a之后,本申请实施例提供的拍摄控制方法还可以包括下述的步骤104。
步骤101a、电子设备对第三图像进行图像分割处理,得到第一中间图像和M-N个第一光轨图像。
步骤104、电子设备保存该M-N个第一光轨图像。
可以理解,上述步骤101至步骤104,可以是在用户点击“拍摄”选项,开始拍摄目标视频之后,接收到的用户调节数量控件以减少同时生成光轨数量的输入(也可以是拍摄目标视频的过程中,用户多次调节数量控件之后,再次接收到的用户调节数量控件以减少同时生成光轨数量的输入),电子设备响应于该输入,把减少的(实际)正在生成的光轨(M条光轨中除N条光轨之外的光轨)的图像通过图像分割技术提取出来(如图3所示,可以通过图像分割技术将图像的背景和前景分割开来,也可以将前景中的光轨1和光轨2分割开来。如图4所示,将光轨2分割出来保存),然后按逐帧保存下来,供后续处理使用,在视频预览窗中该减少的(实际)正在生成的光轨停止生成。
可以理解,用户减少同时生成光轨数量的情况,如果此时实际正在生成的光轨数量大于用户设定的同时生成光轨数量,在视频预览窗中显示的图像中只包括用户设定的同时生成的光轨。那么多出来的光轨的图像可以先分割出来,并保存下来,后续当用户又设定该多出来的光轨中的一部分或全部光轨继续生成时,电子设备可以从存储区域把之前保存的该一部分或全部光轨对应的光轨图像取出来,再合成到预览图像中,在视频预览窗中进行显示,以体现该一部分或全部光轨的生成过程。
可选地,在N大于M的情况下,该N条光轨包括该M条光轨(即在原来的M条光轨继续保持生成的情况下,增加一些之前控制停止生成的光轨,重新恢复生成),上述步骤204c具体地可以通过下述步骤105至步骤106实现。
步骤105、电子设备获取N-M个第三光轨图像。
其中,每个所述第三光轨图像为:所述第一输入之前保存的,与一条第一目标光轨对应的光轨图像集中,光轨长度最短的光轨图像;所述一条第一目标光轨为:所述N条光轨中除所述M条光轨之外的N-M条光轨中的一条光轨。
可以理解,不同的第三光轨图像对应不同的第一目标光轨。
本申请实施例中,对光轨图像集的描述可以参考上述步骤13中对光轨图像集的相关描述,此处不予赘述。
步骤106、电子设备对所述第三图像和所述N-M个第三光轨图像进行图像合成处理,得到所述第二图像。
可选地,若第三图像中除该M条光轨图像之外可以不包括其他光轨,也可以包括其他光轨。可以理解为:第三图像中除该M条光轨图像之外不包括该N-M条光轨(第三图像中不包括该N-M条光轨),第三图像中除该M条光轨图像之外包括该N-M条光轨(第三图像中包括该N-M条光轨),第三图像中除该M条光轨图像之外包括该N-M条光轨中的一部分光轨(第三图像中包括该N-M条光轨中的一部分光轨)。针对,N-M条光轨中不在第三图像中的光轨,步骤106中的合成处理可以为将该不在第三图像中的光轨对应的第三光轨图像插入第三图像中的相应位置;针对,N-M条光轨中在第三图像中的光轨,步骤106中的合成处理可以为用该不在第三图像中的光轨对应的第三光轨图像替换第三图像中的相应位置的光轨图像。
可选地,在第三图像中包括该N-M条光轨的情况下,上述步骤106具体也可以通过下述步骤106a至步骤106b实现。
步骤106a、电子设备对第三图像进行图像分割处理,得到第二中间图像。
其中,第二中间图像为:从第三图像中分割出N-M个第五光轨图像之后的图像,每个第五光轨图像为:该N条光轨中除该M条光轨之外的一条光轨的图像。
可以理解,一个第五光轨图像对应一个第三光轨图像,不同的第五光轨图像对应不同的第三光轨图像。该光轨图像集中的每个光轨图像中的光轨的长度均大于对应的第五光轨图像中的光轨的长度。因为第五光轨图像为:第三图像中长度未增长的一条光轨的图像, 而该光轨图像集中保存的光轨图像都是实际长度增长,但未显示在视频预览窗中的光轨图像。
步骤106b、电子设备对第二中间图像和该N-M个第三光轨图像进行图像合成处理,得到第二图像。
可以理解,上述步骤105至步骤106是在用户减少同时生成的光轨数量之后,有一定数量缓存起来的光轨图像之后,又提高同时生成的光轨数量(将原来控制停止生成的光轨中的一部分或全部光轨又恢复生成)。此时由于有缓存的光轨,当提高光轨同时生成数量以后,可以将该一部分或全部光轨的实时生成图像分割出来并保存起来,然后将之前保存的对应光轨图像提取出来,合成到预览图像中,显示在视频预览窗中,也就是说,会在视频预览窗上显示大量之前缓存的光轨图像。
可选地,第一界面中还包括至少一个第二控件,每个第二控件用于确定该N-M个第三光轨图像的分布模式,上述步骤105之前,本申请实施例提供的拍摄控制方法还可以包括下述的步骤107至步骤108,上述步骤105具体可以通过下述步骤105a实现。
步骤107、电子设备接收用户对第二目标控件的第二输入。
其中,第二目标控件为该至少一个第二控件中的控件。
可选地,该至少一个第二控件包括以下至少一项:单点扩散分布控件,多点扩散分布控件,均匀分布控件,随机分布控件。
示例性地,该至少一个第二控件包括:单点扩散分布控件、多点扩散分布控件、均匀分布控件、随机分布控件,分别对应图2中的标记“4”指示的“单点扩散”、“多点扩散”、“均匀”、“随机”。
可以理解,本申请实施例中,如图5中的(a)所示,单点扩散分布控件用于控制恢复生成的光轨是围绕一个中心点分布的(即优先选取靠近中心点的光轨恢复生成,然后选取远离中心点的光轨恢复生成)。如图5中的(b)所示,多点扩散分布控件用于控制恢复生成的光轨是分别围绕多个中心点分布的(即优先选取任意靠近两个中心点的光轨恢复生成,然后选取任意远离两个中心点的光轨恢复生成)。如图6中的(a)所示,均匀分布控件用于控制恢复生成的光轨是在图像中均匀分布的,不要集中于任意一点或多点分布。如图6中的(b)所示,随机分布控件用于控制恢复生成的光轨是在图像中随机分布的,不限制其分布形式。
步骤108、电子设备响应于第二输入,确定该N-M个第三光轨图像的分布模式为目标模式。
步骤105a、电子设备响应于第二输入,从K个第四光轨图像中获取分布模式为目标模式的N-M个第四光轨图像,作为该N-M个第三光轨图像。
其中,每个第四光轨图像为:第一输入之前保存的,与一条第二目标光轨对应的光轨图像集中,光轨长度最短的光轨图像,一条第二目标光轨为除M条光轨之外的一条光轨,K为大于N-M的正整数。
可以理解,不同的第四光轨图像对应不同的第二目标光轨。
示例性地,如图4所示,当用户选择“单点扩散”选项时,如图7所示,响应于用户输入,电子设备控制恢复生成的光轨围绕用户指定的中心点向外扩散分布。若用户不选择任意一个第二控件,则电子设备可以按照原来的顺序(光轨出现时间的先后顺序)依次分布。
本申请实施例中,增加至少一个第二控件,用户可以根据自身需求选择对应的第二控件,以使恢复生成的光轨是按照用户需求分布的,从而可以增加拍摄形式。
可选地,第一目标控件为该至少一个第一控件中的速度控件,该光轨生成参数为光轨的生成速度,该速度控件用于调节处于生成状态的光轨的生成速度;上述步骤204具体可 以通过下述步骤204d实现。
步骤204d、电子设备在第一时间段内,以第一帧率更新该视频预览窗中的图像。
其中,第一时间段的起始时刻为接收到第一输入的时刻,第一时间段的结束时刻为接收到目标输入的时刻,该目标输入为以下任意一项:对该至少一个第一控件中的控件的输入,停止拍摄该目标视频的输入。
第一帧率是基于第一输入确定的,在未接收到第一输入的情况下,以第二帧率更新该视频预览窗中的图像;第二帧率与第一帧率不同。
可以理解,本申请实施例中,用户通过对速度控件的第一输入,调整了视频的帧率,从而可以改变光轨的生成速度。若降低帧率,则降低光轨的生成速度,若提高帧率,则提高光轨的生成速度。如此,可以增加拍摄形式。
可选地,在第一帧率小于第二帧率的情况下(即降低帧率),在上述步骤204d之后,本申请实施例提供的拍摄控制方法还可以包括下述的步骤205。
步骤205、电子设备保存第一目标图像。
第一目标图像为第一时间段内,未在该视频预览窗中更新的图像。
可以理解,本申请实施例中,若降低帧率,则有一部分生成的视频片段(即第一目标图像,本申请实施例中,不限定第一目标图像中的图像数量)无法在视频预览窗中显示,电子设备可以先将第一目标图像保存起来,以待后续提高帧率时使用。
本申请实施例中,降低帧率即延缓光轨的生成速度。具体地,如图8所示,电子设备可以将获得的预览图像保存下来,然后按照F*(X%)帧每秒的速度在视频预览窗中显示。其中,X%为用户设定的速度控件的值,F帧每秒为第一输入之前的帧率。
可选地,在第一帧率大于第二帧率的情况下(即提高帧率),在上述步骤204d之前,本申请实施例提供的拍摄控制方法还可以包括下述的步骤206,上述步骤204d具体可以通过下述步骤204d1实现。
步骤206、电子设备获取第二目标图像,第二目标图像为:第一输入之前,保存的未在该视频预览窗中更新的图像。
步骤204d1、电子设备在第一时间段内,基于第二目标图像和第三目标图像,以第一帧率更新该视频预览窗中的图像。
其中,第三目标图像为:第一时间段内生成的图像。
本申请实施例中,在之前用户通过降低速率的输入延缓光轨的生成速度的前提下,本步骤才可以通过提高帧率的方法加快光轨的生成速度,甚至使光轨的生成速度高于实际生成速度(高于F帧每秒)。具体地,(如图9所示),电子设备把当前生成的图像逐帧保存下来,然后把之前逐帧缓存下来的图像以F*(Y%)帧每秒的速度重新在视频预览窗中显示。其中,其中,Y%为用户设定的速度控件的值。
可选地,用户还可以通过点击“停止拍摄”按钮,结束拍摄,并生成目标视频。具体地,用户可以手动停止视频录制,电子设备把录制出来的视频保存到本地,供用户查看。
本申请实施例中,在拍摄光轨生成过程的视频的过程中,在电子设备相机的流光快门视频模式下,用户可以通过对速度控件的输入调节光轨的生成速度,以使光轨延缓生成或加快生成。
需要说明的是,本申请实施例提供的拍摄控制方法,执行主体可以为拍摄控制装置,或者该拍摄控制装置中的用于执行拍摄控制方法的功能模块和/或功能实体。本申请实施例中以拍摄控制装置执行拍摄控制方法为例,说明本申请实施例提供的拍摄控制方法的装置。
图10示出了本申请实施例中涉及的拍摄控制装置的一种可能的结构示意图。如图10所示,该拍摄控制装置300可以包括:显示模块301、接收模块302、调节模块303和更 新模块304;该显示模块301,用于在拍摄目标视频的过程中,显示第一界面,该目标视频为光轨的生成过程视频,第一界面包括视频预览窗和至少一个第一控件;该接收模块302,用于接收用户对第一目标控件的第一输入,第一目标控件为该至少一个第一控件中的控件;该调节模块303,用于响应于该接收模块接收的第一输入,调节与第一目标控件对应的光轨生成参数;该更新模块304,用于基于调节后的光轨生成参数,更新该视频预览窗中的图像。
可选地,第一目标控件为该至少一个第一控件中的数量控件,该光轨生成参数为同时生成的光轨数量;在第一输入之前,该视频预览窗中显示第一图像;该更新模块304,具体用于将该视频预览窗中的第一图像更新为第二图像;其中,在未接收到第一输入的情况下,该更新模块304用于将该视频预览窗中的第一图像更新为第三图像;与第一图像相比,第二图像中有N条光轨长度增加;与第一图像相比,第三图像中有M条光轨长度增加;N、M均为正整数,且N、M不同。
可选地,该拍摄控制装置300还包括:获取模块305和图像处理模块306;该获取模块305,用于在该更新模块304将该视频预览窗中的第一图像更新为第二图像之前,获取第三图像;该图像处理模块306,用于对第三图像进行目标处理,得到第二图像。
可选地,在N小于M的情况下,该M条光轨包括该N条光轨,该图像处理模块306,具体用于对第三图像进行图像分割处理,得到第一中间图像,第一中间图像为:从第三图像中分割出M-N个第一光轨图像之后的图像,每个第一光轨图像为:该M条光轨中除该N条光轨之外的一条光轨的图像;对第一图像进行图像分割处理,得到M-N个第二光轨图像,每个第二光轨图像为:该M条光轨中除该N条光轨之外的一条光轨的图像;对第一中间图像和该M-N个第二光轨图像进行图像合成处理,得到第二图像。
可选地,该拍摄控制装置300还包括:保存模块307;该图像处理模块306,具体用于对第三图像进行图像分割处理,得到第一中间图像和M-N个第一光轨图像;该保存模块307,用于保存该M-N个第一光轨图像。
可选地,在N大于M的情况下,该N条光轨包括该M条光轨,该图像处理模块306,用于获取N-M个第三光轨图像;对第三图像和该N-M个第三光轨图像进行图像合成处理,得到第二图像;其中,每个第三光轨图像为:第一输入之前保存的,与一条第一目标光轨对应的光轨图像集中,光轨长度最短的光轨图像;该一条第一目标光轨为:该N条光轨中除该M条光轨之外的N-M条光轨中的一条光轨。
可选地,该拍摄控制装置300装置还包括:确定模块308;第一界面中还包括至少一个第二控件,该接收模块302,还用于在该获取N-M个第三光轨图像之前,接收用户对第二目标控件的第二输入,第二目标控件为该至少一个第二控件中的控件;该确定模块308,用于响应于该接收模块接收的第二输入,确定该N-M个第三光轨图像的分布模式为目标模式;该获取模块305,具体用于从K个第四光轨图像中获取分布模式为目标模式的N-M个第四光轨图像,作为该N-M个第三光轨图像;其中,每个第四光轨图像为:第一输入之前保存的,与一条第二目标光轨对应的光轨图像集中,光轨长度最短的光轨图像,该一条第二目标光轨为除该M条光轨之外的一条光轨,K为大于N-M的正整数。
可选地,该至少一个第二控件包括以下至少一项:单点扩散分布控件,多点扩散分布控件,均匀分布控件,随机分布控件。
可选地,第一目标控件为该至少一个第一控件中的速度控件,该光轨生成参数为光轨的生成速度;该更新模块304,具体用于在第一时间段内,以第一帧率更新该视频预览窗中的图像;其中,第一时间段的起始时刻为接收到第一输入的时刻,第一时间段的结束时刻为接收到目标输入的时刻,该目标输入为以下任意一项:对该至少一个第一控件中的控件的输入,停止拍摄该目标视频的输入;在未接收到第一输入的情况下,以第二帧率更新 该视频预览窗中的图像;第二帧率与第一帧率不同。
可选地,该拍摄控制装置300还包括:保存模块307;该保存模块307,用于在第一帧率小于第二帧率的情况下,该在第一时间段内,以第一帧率更新该视频预览窗中的图像之后,保存第一目标图像,第一目标图像为第一时间段内,未在该视频预览窗中更新的图像。
可选地,该拍摄控制装置300还包括:获取模块305;该获取模块305,用于在第一帧率大于第二帧率的情况下,该在第一时间段内,以第一帧率更新该视频预览窗中的图像之前,获取第二目标图像,第二目标图像为:第一输入之前,保存的未在该视频预览窗中更新的图像;该更新模块304,具体用于在第一时间段内,基于第二目标图像和第三目标图像,以第一帧率更新该视频预览窗中的图像;其中,第三目标图像为:第一时间段内生成的图像。
需要说明的是:如图10所示,该拍摄控制装置300中一定包括的模块用实线框示意,如显示模块301、接收模块302、调节模块303和更新模块304;该拍摄控制装置300中可以包括也可以不包括的模块用虚线框示意,如获取模块305、图像处理模块306、保存模块307和确定模块308。
本申请实施例提供一种拍摄控制装置,可以通过在拍摄目标视频的过程中,显示第一界面,该目标视频为光轨的生成过程视频,第一界面包括视频预览窗和至少一个第一控件;接收用户对第一目标控件的第一输入,第一目标控件为该至少一个第一控件中的控件;响应于第一输入,调节与第一目标控件对应的光轨生成参数,并基于调节后的光轨生成参数,更新该视频预览窗中的图像。该方案中,在拍摄光轨的生成过程视频的过程中,用户可以通过对第一目标控件(至少一个第一控件中的控件,每个第一控件用于调节光轨的生成参数)的第一输入,调节光轨的生成参数,更新视频预览窗中的图像,如此,在拍摄过程中,用户可以通过输入改变光轨生成过程中的生成参数(即调节拍摄形式),使拍摄形式多样化,从而可以解决相关技术光轨生成参数固定,拍摄形式比较单一的问题。
本申请实施例中的拍摄控制装置可以是装置,也可以是电子设备或电子设备中的部件、集成电路、或芯片。该电子设备可以是移动电子设备,也可以为非移动电子设备。示例性地,移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等,非移动电子设备可以为服务器、网络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例中的拍摄控制装置可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为iOS操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
本申请实施例提供的拍摄控制装置能够实现图1至图9的方法实施例实现的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
可选地,如图11所示,本申请实施例还提供一种电子设备400,包括处理器401,存储器402,存储在存储器402上并可在所述处理器401上运行的程序或指令,该程序或指令被处理器401执行时实现上述拍摄控制方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,本申请实施例中的电子设备包括上述所述的移动电子设备和非移动电子设备。
图12为实现本申请实施例的一种电子设备的硬件结构示意图。该电子设备500包括但不限于:射频单元501、网络模块502、音频输出单元503、输入单元504、传感器505、 显示单元506、用户输入单元507、接口单元508、存储器509、以及处理器510等部件。
本领域技术人员可以理解,电子设备500还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器510逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图12中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
其中,显示单元506,用于在拍摄目标视频的过程中,显示第一界面,目标视频为光轨的生成过程视频,第一界面包括视频预览窗和至少一个第一控件;用户输入单元507,用于接收用户对第一目标控件的第一输入,第一目标控件为至少一个第一控件中的控件;处理器510,用于响应于第一输入,调节与第一目标控件对应的光轨生成参数,并基于调节后的光轨生成参数,更新该视频预览窗中的图像。
可选地,第一目标控件为至少一个第一控件中的数量控件,光轨生成参数为同时生成的光轨数量;在第一输入之前,视频预览窗中显示第一图像;处理器510,具体用于将视频预览窗中的第一图像更新为第二图像;其中,在未接收到第一输入的情况下,处理器510用于将视频预览窗中的第一图像更新为第三图像;与第一图像相比,第二图像中有N条光轨长度增加;与第一图像相比,第三图像中有M条光轨长度增加;N、M均为正整数,且N、M不同。
可选地,处理器510,还用于在将视频预览窗中的第一图像更新为第二图像之前,获取第三图像,对第三图像进行目标处理,得到第二图像。
可选地,在N小于M的情况下,M条光轨包括N条光轨,处理器510,具体用于对第三图像进行图像分割处理,得到第一中间图像,第一中间图像为:从第三图像中分割出M-N个第一光轨图像之后的图像,每个第一光轨图像为:M条光轨中除N条光轨之外的一条光轨的图像;对第一图像进行图像分割处理,得到M-N个第二光轨图像,每个第二光轨图像为:M条光轨中除N条光轨之外的一条光轨的图像;对第一中间图像和M-N个第二光轨图像进行图像合成处理,得到第二图像。
可选地,处理器510,具体用于对第三图像进行图像分割处理,得到第一中间图像和M-N个第一光轨图像;还用于保存M-N个第一光轨图像。
可选地,在N大于M的情况下,N条光轨包括M条光轨,处理器510,具体用于获取N-M个第三光轨图像;对第三图像和N-M个第三光轨图像进行图像合成处理,得到第二图像;其中,每个第三光轨图像为:第一输入之前保存的,与一条第一目标光轨对应的光轨图像集中,光轨长度最短的光轨图像;一条第一目标光轨为:N条光轨中除M条光轨之外的N-M条光轨中的一条光轨。
可选地,第一界面中还包括至少一个第二控件,用户输入单元507,还用于在获取N-M个第三光轨图像之前,接收用户对第二目标控件的第二输入,第二目标控件为至少一个第二控件中的控件;处理器510,还用于响应于第二输入,确定N-M个第三光轨图像的分布模式为目标模式;处理器510,具体用于从K个第四光轨图像中获取分布模式为目标模式的N-M个第四光轨图像,作为N-M个第三光轨图像;其中,每个第四光轨图像为:第一输入之前保存的,与一条第二目标光轨对应的光轨图像集中,光轨长度最短的光轨图像,一条第二目标光轨为除M条光轨之外的一条光轨,K为大于N-M的正整数。
可选地,至少一个第二控件包括以下至少一项:单点扩散分布控件,多点扩散分布控件,均匀分布控件,随机分布控件。
可选地,第一目标控件为至少一个第一控件中的速度控件,光轨生成参数为光轨的生成速度;处理器510,具体用于在第一时间段内,以第一帧率更新视频预览窗中的图像;其中,第一时间段的起始时刻为接收到第一输入的时刻,第一时间段的结束时刻为接收到 目标输入的时刻,目标输入为以下任意一项:对至少一个第一控件中的控件的输入,停止拍摄目标视频的输入;在未接收到第一输入的情况下,以第二帧率更新视频预览窗中的图像;第二帧率与第一帧率不同。
可选地,处理器510,还用于在第一帧率小于第二帧率的情况下,在第一时间段内,以第一帧率更新视频预览窗中的图像之后,保存第一目标图像,第一目标图像为第一时间段内,未在视频预览窗中更新的图像。
可选地,处理器510,还用于在第一帧率大于第二帧率的情况下,在第一时间段内,以第一帧率更新视频预览窗中的图像之前,获取第二目标图像,第二目标图像为:第一输入之前,保存的未在视频预览窗中更新的图像;还具体用于在第一时间段内,基于第二目标图像和第三目标图像,以第一帧率更新视频预览窗中的图像;其中,第三目标图像为:第一时间段内生成的图像。
本申请实施例提供的电子设备,可以通过在拍摄目标视频的过程中,显示第一界面,该目标视频为光轨的生成过程视频,第一界面包括视频预览窗和至少一个第一控件;接收用户对第一目标控件的第一输入,第一目标控件为该至少一个第一控件中的控件;响应于第一输入,调节与第一目标控件对应的光轨生成参数,并基于调节后的光轨生成参数,更新该视频预览窗中的图像。该方案中,在拍摄光轨的生成过程视频的过程中,用户可以通过对第一目标控件(至少一个第一控件中的控件,每个第一控件用于调节光轨的生成参数)的第一输入,调节光轨的生成参数,更新视频预览窗中的图像,如此,在拍摄过程中,用户可以通过输入改变光轨生成过程中的生成参数(即调节拍摄形式),使拍摄形式多样化,从而可以解决相关技术光轨生成参数固定,拍摄形式比较单一的问题。
应理解的是,本申请实施例中,射频单元501可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器510处理;另外,将上行的数据发送给基站。此外,射频单元501还可以通过无线通信系统与网络和其他设备通信。电子设备通过网络模块502为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。音频输出单元503可以将射频单元501或网络模块502接收的或者在存储器509中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元503还可以提供与电子设备500执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。输入单元504可以包括图形处理器(Graphics Processing Unit,GPU)5041和麦克风5042,图形处理器5041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元506可包括显示面板5061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板5061。用户输入单元507包括触控面板5071以及其他输入设备5072。触控面板5071,也称为触摸屏。触控面板5071可包括触摸检测装置和触摸控制器两个部分。其他输入设备5072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。存储器509可用于存储软件程序以及各种数据,包括但不限于应用程序和操作系统。处理器510可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器510中。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述拍摄控制方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的电子设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述拍摄控制方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (22)

  1. 一种拍摄控制方法,所述方法包括:
    在拍摄目标视频的过程中,显示第一界面,所述目标视频为光轨的生成过程视频,所述第一界面包括视频预览窗和至少一个第一控件;
    接收对第一目标控件的第一输入,所述第一目标控件为所述至少一个第一控件中的控件;
    响应于所述第一输入,调节与所述第一目标控件对应的光轨生成参数,并基于调节后的光轨生成参数,更新所述视频预览窗中的图像。
  2. 根据权利要求1所述的方法,其中,所述第一目标控件为所述至少一个第一控件中的数量控件,所述光轨生成参数为同时生成的光轨数量;在所述第一输入之前,所述视频预览窗中显示第一图像;
    所述基于调节后的光轨生成参数,更新所述视频预览窗中的图像,包括:
    将所述视频预览窗中的所述第一图像更新为第二图像;
    其中,在未接收到所述第一输入的情况下,将所述视频预览窗中的所述第一图像更新为第三图像;
    与所述第一图像相比,所述第二图像中有N条光轨长度增加;与所述第一图像相比,所述第三图像中有M条光轨长度增加;N、M均为正整数,且N、M不同。
  3. 根据权利要求2所述的方法,其中,
    所述将所述视频预览窗中的所述第一图像更新为第二图像之前,所述方法还包括:
    获取所述第三图像,对所述第三图像进行目标处理,得到所述第二图像。
  4. 根据权利要求3所述的方法,其中,在N小于M的情况下,所述M条光轨包括所述N条光轨,所述对所述第三图像进行目标处理,得到所述第二图像,包括:
    对所述第三图像进行图像分割处理,得到第一中间图像,所述第一中间图像为:从所述第三图像中分割出M-N个第一光轨图像之后的图像,每个所述第一光轨图像为:所述M条光轨中除所述N条光轨之外的一条光轨的图像;
    对所述第一图像进行图像分割处理,得到M-N个第二光轨图像,每个所述第二光轨图像为:所述M条光轨中除所述N条光轨之外的一条光轨的图像;
    对所述第一中间图像和所述M-N个第二光轨图像进行图像合成处理,得到所述第二图像。
  5. 根据权利要求3所述的方法,其中,在N大于M的情况下,所述N条光轨包括所述M条光轨,所述对所述第三图像进行目标处理,得到所述第二图像,包括:
    获取N-M个第三光轨图像;
    对所述第三图像和所述N-M个第三光轨图像进行图像合成处理,得到所述第二图像;
    其中,每个所述第三光轨图像为:所述第一输入之前保存的,与一条第一目标光轨对应的光轨图像集中,光轨长度最短的光轨图像;所述一条第一目标光轨为:所述N条光轨中除所述M条光轨之外的N-M条光轨中的一条光轨。
  6. 根据权利要求5所述的方法,其中,所述第一界面中还包括至少一个第二控件,所述获取N-M个第三光轨图像之前,所述方法还包括:
    接收用户对第二目标控件的第二输入,所述第二目标控件为所述至少一个第二控件中的控件;
    响应于所述第二输入,确定所述N-M个第三光轨图像的分布模式为目标模式;
    所述获取N-M个第三光轨图像,包括:
    从K个第四光轨图像中获取分布模式为目标模式的N-M个第四光轨图像,作为所述N-M个第三光轨图像;
    其中,每个所述第四光轨图像为:所述第一输入之前保存的,与一条第二目标光轨对应的光轨图像集中,光轨长度最短的光轨图像,所述一条第二目标光轨为除所述M条光轨之外的一条光轨,K为大于N-M的正整数。
  7. 根据权利要求6所述的方法,其中,所述至少一个第二控件包括以下至少一项:
    单点扩散分布控件,多点扩散分布控件,均匀分布控件,随机分布控件。
  8. 根据权利要求1所述的方法,其中,所述第一目标控件为所述至少一个第一控件中的速度控件,所述光轨生成参数为光轨的生成速度;
    所述基于调节后的光轨生成参数,更新所述视频预览窗中的图像,包括:
    在第一时间段内,以第一帧率更新所述视频预览窗中的图像;
    其中,所述第一时间段的起始时刻为接收到所述第一输入的时刻,所述第一时间段的结束时刻为接收到目标输入的时刻,所述目标输入为以下任意一项:对所述至少一个第一控件中的控件的输入,停止拍摄所述目标视频的输入;
    在未接收到所述第一输入的情况下,以第二帧率更新所述视频预览窗中的图像;所述第二帧率与所述第一帧率不同。
  9. 根据权利要求8所述的方法,其中,在所述第一帧率小于所述第二帧率的情况下,所述在第一时间段内,以第一帧率更新所述视频预览窗中的图像之后,所述方法还包括:
    保存第一目标图像,所述第一目标图像为所述第一时间段内,未在所述视频预览窗中更新的图像。
  10. 根据权利要求8所述的方法,其中,在所述第一帧率大于所述第二帧率的情况下,所述在第一时间段内,以第一帧率更新所述视频预览窗中的图像之前,所述方法还包括:
    获取第二目标图像,所述第二目标图像为:所述第一输入之前,保存的未在所述视频预览窗中更新的图像;
    所述在第一时间段内,以第一帧率更新所述视频预览窗中的图像,包括:
    在所述第一时间段内,基于所述第二目标图像和第三目标图像,以第一帧率更新所述视频预览窗中的图像;
    其中,所述第三目标图像为:所述第一时间段内生成的图像。
  11. 一种拍摄控制装置,所述装置包括:显示模块、接收模块、调节模块和更新模块;
    所述显示模块,用于在拍摄目标视频的过程中,显示第一界面,所述目标视频为光轨的生成过程视频,所述第一界面包括视频预览窗和至少一个第一控件;
    所述接收模块,用于接收对第一目标控件的第一输入,所述第一目标控件为所述至少一个第一控件中的控件;
    所述调节模块,用于响应于所述接收模块接收的所述第一输入,调节与所述第一目标控件对应的光轨生成参数;
    所述更新模块,用于基于调节后的光轨生成参数,更新所述视频预览窗中的图像。
  12. 根据权利要求11所述的装置,其中,所述第一目标控件为所述至少一个第一控件中的数量控件,所述光轨生成参数为同时生成的光轨数量;在所述第一输入之前,所述视频预览窗中显示第一图像;
    所述更新模块,具体用于将所述视频预览窗中的所述第一图像更新为第二图像;
    其中,在未接收到所述第一输入的情况下,所述更新模块用于将所述视频预览窗 中的所述第一图像更新为第三图像;
    与所述第一图像相比,所述第二图像中有N条光轨长度增加;与所述第一图像相比,所述第三图像中有M条光轨长度增加;N、M均为正整数,且N、M不同。
  13. 根据权利要求12所述的装置,其中,所述装置还包括:获取模块和图像处理模块;
    所述获取模块,用于在所述更新模块将所述视频预览窗中的所述第一图像更新为第二图像之前,获取所述第三图像;
    所述图像处理模块,用于对所述第三图像进行目标处理,得到所述第二图像。
  14. 根据权利要求13所述的装置,其中,在N小于M的情况下,所述M条光轨包括所述N条光轨,所述图像处理模块,具体用于对所述第三图像进行图像分割处理,得到第一中间图像,所述第一中间图像为:从所述第三图像中分割出M-N个第一光轨图像之后的图像,每个所述第一光轨图像为:所述M条光轨中除所述N条光轨之外的一条光轨的图像;对所述第一图像进行图像分割处理,得到M-N个第二光轨图像,每个所述第二光轨图像为:所述M条光轨中除所述N条光轨之外的一条光轨的图像;对所述第一中间图像和所述M-N个第二光轨图像进行图像合成处理,得到所述第二图像。
  15. 根据权利要求13所述的装置,其中,在N大于M的情况下,所述N条光轨包括所述M条光轨,所述图像处理模块,用于获取N-M个第三光轨图像;对所述第三图像和所述N-M个第三光轨图像进行图像合成处理,得到所述第二图像;其中,每个所述第三光轨图像为:所述第一输入之前保存的,与一条第一目标光轨对应的光轨图像集中,光轨长度最短的光轨图像;所述一条第一目标光轨为:所述N条光轨中除所述M条光轨之外的N-M条光轨中的一条光轨。
  16. 根据权利要求11所述的装置,其中,所述第一目标控件为所述至少一个第一控件中的速度控件,所述光轨生成参数为光轨的生成速度;
    所述更新模块,具体用于在第一时间段内,以第一帧率更新所述视频预览窗中的图像;
    其中,所述第一时间段的起始时刻为接收到所述第一输入的时刻,所述第一时间段的结束时刻为接收到目标输入的时刻,所述目标输入为以下任意一项:对所述至少一个第一控件中的控件的输入,停止拍摄所述目标视频的输入;
    在未接收到所述第一输入的情况下,以第二帧率更新所述视频预览窗中的图像;所述第二帧率与所述第一帧率不同。
  17. 根据权利要求16所述的装置,其中,所述装置还包括:保存模块;
    所述保存模块,用于在所述第一帧率小于所述第二帧率的情况下,所述在第一时间段内,以第一帧率更新所述视频预览窗中的图像之后,保存第一目标图像,所述第一目标图像为所述第一时间段内,未在所述视频预览窗中更新的图像。
  18. 根据权利要求16所述的装置,其中,所述装置还包括:获取模块;
    所述获取模块,用于在所述第一帧率大于所述第二帧率的情况下,所述在第一时间段内,以第一帧率更新所述视频预览窗中的图像之前,获取第二目标图像,所述第二目标图像为:所述第一输入之前,保存的未在所述视频预览窗中更新的图像;
    所述更新模块,具体用于在所述第一时间段内,基于所述第二目标图像和第三目标图像,以第一帧率更新所述视频预览窗中的图像;
    其中,所述第三目标图像为:所述第一时间段内生成的图像。
  19. 一种电子设备,包括处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至10 中任一项所述的拍摄控制方法的步骤。
  20. 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1至10中任一项所述的拍摄控制方法的步骤。
  21. 一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如权利要求1至10中任一项所述的拍摄控制方法。
  22. 一种电子设备,包括所述电子设备被配置成用于执行如权利要求1至10中任一项所述的拍摄控制方法。
PCT/CN2022/073935 2021-01-28 2022-01-26 拍摄控制方法、装置和电子设备 WO2022161383A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023545373A JP2024504744A (ja) 2021-01-28 2022-01-26 撮影制御方法、装置と電子機器
EP22745257.0A EP4287611A1 (en) 2021-01-28 2022-01-26 Filming control method and apparatus, and electronic device
US18/227,883 US20230412913A1 (en) 2021-01-28 2023-07-28 Shooting control method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110121815.5 2021-01-28
CN202110121815.5A CN112954201B (zh) 2021-01-28 2021-01-28 拍摄控制方法、装置和电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/227,883 Continuation US20230412913A1 (en) 2021-01-28 2023-07-28 Shooting control method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022161383A1 true WO2022161383A1 (zh) 2022-08-04

Family

ID=76239082

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/073935 WO2022161383A1 (zh) 2021-01-28 2022-01-26 拍摄控制方法、装置和电子设备

Country Status (5)

Country Link
US (1) US20230412913A1 (zh)
EP (1) EP4287611A1 (zh)
JP (1) JP2024504744A (zh)
CN (1) CN112954201B (zh)
WO (1) WO2022161383A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112954201B (zh) * 2021-01-28 2022-09-27 维沃移动通信有限公司 拍摄控制方法、装置和电子设备
CN114222069B (zh) * 2022-01-28 2024-04-30 维沃移动通信有限公司 拍摄方法、拍摄装置及电子设备
CN116723382B (zh) * 2022-02-28 2024-05-03 荣耀终端有限公司 一种拍摄方法及相关设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293758A1 (en) * 2010-10-12 2013-11-07 Ability Enterprise Co., Ltd. Method of producing an image
CN109688331A (zh) * 2019-01-10 2019-04-26 深圳市阿力为科技有限公司 一种延时摄影方法及装置
CN110035141A (zh) * 2019-02-22 2019-07-19 华为技术有限公司 一种拍摄方法及设备
CN110995993A (zh) * 2019-12-06 2020-04-10 北京小米移动软件有限公司 星轨视频拍摄方法、星轨视频拍摄装置及存储介质
CN112954201A (zh) * 2021-01-28 2021-06-11 维沃移动通信有限公司 拍摄控制方法、装置和电子设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079835A (zh) * 2014-07-02 2014-10-01 深圳市中兴移动通信有限公司 拍摄星云视频的方法和装置
CN105072350B (zh) * 2015-06-30 2019-09-27 华为技术有限公司 一种拍照方法及装置
CN106060384B (zh) * 2016-05-31 2019-07-19 努比亚技术有限公司 拍照控制方法和装置
WO2018137264A1 (zh) * 2017-01-26 2018-08-02 华为技术有限公司 终端的拍照方法、拍照装置和终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293758A1 (en) * 2010-10-12 2013-11-07 Ability Enterprise Co., Ltd. Method of producing an image
CN109688331A (zh) * 2019-01-10 2019-04-26 深圳市阿力为科技有限公司 一种延时摄影方法及装置
CN110035141A (zh) * 2019-02-22 2019-07-19 华为技术有限公司 一种拍摄方法及设备
CN110995993A (zh) * 2019-12-06 2020-04-10 北京小米移动软件有限公司 星轨视频拍摄方法、星轨视频拍摄装置及存储介质
CN112954201A (zh) * 2021-01-28 2021-06-11 维沃移动通信有限公司 拍摄控制方法、装置和电子设备

Also Published As

Publication number Publication date
CN112954201A (zh) 2021-06-11
JP2024504744A (ja) 2024-02-01
EP4287611A1 (en) 2023-12-06
US20230412913A1 (en) 2023-12-21
CN112954201B (zh) 2022-09-27

Similar Documents

Publication Publication Date Title
WO2022161383A1 (zh) 拍摄控制方法、装置和电子设备
CN105190511B (zh) 图像处理方法、图像处理装置和图像处理程序
WO2022111463A1 (zh) 拍摄方法、拍摄装置和电子设备
RU2745737C1 (ru) Способ видеозаписи и видеозаписывающий терминал
WO2021254502A1 (zh) 目标对象显示方法、装置及电子设备
WO2022116962A1 (zh) 视频播放方法、装置及电子设备
CN108683852A (zh) 一种视频录制方法、终端及计算机可读存储介质
CN112954199B (zh) 视频录制方法及装置
WO2022161240A1 (zh) 拍摄方法、装置、电子设备及介质
CN112672061B (zh) 视频拍摄方法、装置、电子设备及介质
WO2023134583A1 (zh) 视频录制方法、装置及电子设备
CN111147779A (zh) 视频制作方法、电子设备及介质
WO2022257999A1 (zh) 拍摄方法、装置、电子设备及存储介质
WO2023083132A1 (zh) 拍摄方法、装置、电子设备和可读存储介质
CN114245028A (zh) 图像展示方法、装置、电子设备及存储介质
WO2022156703A1 (zh) 一种图像显示方法、装置及电子设备
US20230368338A1 (en) Image display method and apparatus, and electronic device
JP2013162221A (ja) 情報処理装置、情報処理方法、情報処理プログラム
WO2023125159A1 (zh) 视频生成电路、方法和电子设备
WO2022161310A1 (zh) 显示方法、装置和电子设备
CN114125297B (zh) 视频拍摄方法、装置、电子设备及存储介质
CN112367467B (zh) 显示控制方法、装置、电子设备和介质
CN112604282B (zh) 虚拟镜头控制方法及装置
CN114650370A (zh) 图像拍摄方法、装置、电子设备及可读存储介质
CN113923392A (zh) 视频录制方法、视频录制装置和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22745257

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023545373

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2022745257

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022745257

Country of ref document: EP

Effective date: 20230828