WO2022109897A1 - 延时摄影方法及设备、延时摄影视频生成方法及设备 - Google Patents

延时摄影方法及设备、延时摄影视频生成方法及设备 Download PDF

Info

Publication number
WO2022109897A1
WO2022109897A1 (PCT/CN2020/131657)
Authority
WO
WIPO (PCT)
Prior art keywords
images
exposure
frame
image
time
Prior art date
Application number
PCT/CN2020/131657
Other languages
English (en)
French (fr)
Inventor
郭浩铭
李静
李广
朱传杰
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN202080067831.4A priority Critical patent/CN114514738A/zh
Priority to PCT/CN2020/131657 priority patent/WO2022109897A1/zh
Publication of WO2022109897A1 publication Critical patent/WO2022109897A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the present invention relates to the technical field of photography, and in particular, to a time-lapse photography method and device, and a time-lapse photography video generation method and device.
  • Time-lapse photography is a shooting and video-processing technique in which the captured images are played back at a frame rate higher than the frame rate at which they were shot.
  • Embodiments of the present invention provide a time-lapse photography method and device, and a time-lapse photography video generation method and device, which are used to solve at least one of the above technical problems.
  • an embodiment of the present invention provides a time-lapse photography method, which is applied to a photography device, and the method includes:
  • multiple composite images corresponding to the multiple frame images are obtained by multi-exposure fusion;
  • a time-lapse video is synthesized from the plurality of synthesized images.
  • an embodiment of the present invention provides a time-lapse photography video generation method, which is applied to a time-lapse photography video generation device that is communicatively connected to a photography device, and the method includes:
  • a time-lapse video is synthesized from the plurality of synthesized images.
  • an embodiment of the present invention further provides a photographing device, which includes:
  • at least one processor, and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the time-lapse photography method described in any of the foregoing embodiments.
  • an embodiment of the present invention further provides a storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the time-lapse photography method described in any of the foregoing embodiments are implemented.
  • an embodiment of the present invention further provides a movable platform, including:
  • a movable body, and the photographing apparatus according to any of the foregoing embodiments mounted on the movable body.
  • an embodiment of the present invention further provides a time-lapse photography video generation device, which is communicatively connected to the photography device, including:
  • at least one processor, and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the time-lapse photography video generation method described in any of the foregoing embodiments.
  • an embodiment of the present invention further provides a storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the steps of the time-lapse photography video generation method described in any of the foregoing embodiments are implemented.
  • an embodiment of the present invention further provides a time-lapse photography system, comprising: the photographing device described in any of the foregoing embodiments and the time-lapse photography video generation device described in any of the foregoing embodiments that is communicatively connected to the photographing device.
  • the beneficial effect of the embodiments of the present invention is that: first, multiple frames of images with different exposure values are collected; then multiple smooth bracketed exposure images corresponding to each frame are generated; and then the multiple bracketed exposure images of each frame are merged by multi-exposure fusion into multiple composite images, which are used to synthesize the time-lapse video. Since the exposure values within the multiple bracketed exposure images are smooth, the exposure values of two temporally adjacent composite images obtained by multi-exposure fusion are also smooth, so the time-lapse video generated from the composite images does not flicker.
  • FIG. 1 is a flowchart of an embodiment of the time-lapse photography method of the present invention;
  • FIG. 2 is a schematic diagram of obtaining N frames of images by time-lapse shooting in the present invention;
  • FIG. 3 is a schematic diagram of generating smooth multiple exposure images for each frame of image in the present invention;
  • FIG. 4 is a schematic diagram of obtaining multiple composite images by fusing the multiple bracketed exposure images of each frame of image in the present invention;
  • FIG. 5 is a flowchart of another embodiment of the time-lapse photography method of the present invention;
  • FIG. 6 is a schematic diagram of the dynamic range of images at different times in the present invention;
  • FIG. 7 is a flowchart of another embodiment of the time-lapse photography method of the present invention;
  • FIG. 8 is a schematic diagram of the extended dynamic range supported by the time-lapse photography method of the present invention;
  • FIG. 9 is a schematic diagram of the dynamic range of the smooth multiple bracketed exposure images generated by the time-lapse photography method of the present invention;
  • FIG. 10a is a schematic diagram of an embodiment of a photographing device of the present invention;
  • FIG. 10b is a schematic diagram of an embodiment of a movable platform of the present invention;
  • FIG. 11 is a flowchart of an embodiment of the time-lapse photography video generation method of the present invention;
  • FIG. 12 is a schematic diagram of an embodiment of the time-lapse photography system of the present invention;
  • FIG. 13 is a flowchart of another embodiment of the time-lapse photography method of the present invention.
  • the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, elements, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including storage devices.
  • in the present invention, "module", "device", "system" and the like refer to computer-related entities, such as hardware, a combination of hardware and software, software, or software in execution.
  • an element may be, but is not limited to, a process running on a processor, a processor, an object, an executable element, a thread of execution, a program, and/or a computer.
  • also, an application program or script running on a server, or the server itself, can be an element.
  • One or more elements may reside within a process and/or thread of execution, and an element may be localized on one computer and/or distributed between two or more computers and may be executed from various computer-readable media.
  • Elements may also communicate by way of local and/or remote processes, for example according to a signal having one or more data packets (e.g., data from one element interacting with another element in a local system or a distributed system, and/or interacting with other systems across a network such as the Internet).
  • the embodiments of the present invention provide a time-lapse photography method, which is mainly used to solve the flicker problem existing in time-lapse photography videos.
  • the entire process of the time-lapse photography method may be completed on the photographing-device side.
  • for example, the photographer configures the camera so that the camera executes the time-lapse photography method of the present invention to obtain a time-lapse video; the entire shooting process, the image processing, and the synthesis of the time-lapse video are all completed on the camera side.
  • the photographing equipment may be a video camera, a wearable electronic device, a mobile phone, a tablet computer, a notebook computer, etc., which is not limited in the present invention.
  • the photographing equipment can be mounted on a movable platform, and the movable platform can be an unmanned aerial vehicle (an unmanned aircraft, unmanned vehicle, unmanned boat, etc.), a handheld gimbal, etc., which is not limited in the present invention.
  • an embodiment of the time-lapse photography method of the present invention includes the following method steps:
  • the time-lapse photography device is controlled, via preset settings or program instructions, to collect images at different times with different exposure values, obtaining multiple frames of images {R_i}, where i ranges from 1 to N and N is the number of frames.
  • Figure 2 is a schematic diagram of obtaining N frames of images by time-lapse shooting.
  • multiple smooth exposure bracketing images are generated for each frame of images.
  • for the image frame R_i, multiple smooth bracketed exposure images are determined, which simultaneously include images with exposure values greater than and smaller than that of R_i; here, "smooth" means that the difference in exposure value between two images with adjacent exposure values is smaller than a preset threshold (for example, 1/3 EV, 0.5 EV, or 1 EV).
  • Figure 3 shows a schematic diagram of generating smoothed multiple exposure images for each frame of image.
  • {R_hi, R_ni, R_li} is the bracketed exposure image set corresponding to the image frame R_i.
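  • As a side note, the smoothness criterion above is easy to check programmatically. The following is a minimal sketch, assuming the example thresholds given in the text; the helper itself is illustrative and not part of the patent.

```python
import numpy as np

def is_smooth(evs, threshold_ev=1 / 3):
    """Check the smoothness criterion: exposure values that are adjacent in magnitude
    differ by less than the preset threshold (e.g. 1/3 EV, 0.5 EV, or 1 EV)."""
    diffs = np.abs(np.diff(np.sort(np.asarray(evs, dtype=float))))
    return bool(np.all(diffs < threshold_ev))

print(is_smooth([-1.0, -0.7, -0.4]))  # True: adjacent EVs differ by 0.3 EV
print(is_smooth([-1.0, 0.5, 2.0]))    # False: 1.5 EV steps are not smooth
```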
  • FIG. 4 is a schematic diagram of obtaining multiple composite images based on the fusion of multiple exposure bracketing images of each frame of images.
  • YUV_HDRi is the composite image corresponding to the bracketed exposure images {R_hi, R_ni, R_li}.
  • multi-exposure fusion is used to fuse the multiple exposure bracketing images corresponding to each frame of image into a composite image, to obtain a composite image corresponding to the multiple frames of images.
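  • The text does not prescribe a specific multi-exposure fusion algorithm. One widely used choice is exposure fusion in the style of Mertens et al., available in OpenCV; the sketch below uses it purely as an assumed example, taking three already color-coded 8-bit images of one bracketed set.

```python
import cv2
import numpy as np

def fuse_bracketed(images_8bit):
    """Fuse one bracketed set (e.g. low/normal/high) into a single composite frame.

    Uses OpenCV's Mertens exposure fusion as one possible multi-exposure fusion;
    the patent does not mandate this particular algorithm.
    """
    mertens = cv2.createMergeMertens()
    fused = mertens.process(list(images_8bit))       # float32 result, roughly [0, 1]
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Toy example: three synthetic exposures of the same gradient scene.
base = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (64, 1))
base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
low, normal, high = (np.clip(base * g, 0, 255).astype(np.uint8) for g in (0.5, 1.0, 2.0))
composite = fuse_bracketed([low, normal, high])
print(composite.shape, composite.dtype)
```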
  • in the embodiment of the present invention, multiple frames of images with different exposure values are first collected, and multiple smooth bracketed exposure images corresponding to each frame are then generated.
  • the bracketed exposure images of each frame are fused to obtain multiple composite images, which are used to synthesize the time-lapse video. Since the exposure values within the multiple bracketed exposure images are smooth, the exposure values of two temporally adjacent composite images obtained by multi-exposure fusion are also smooth, so the time-lapse video generated from the composite images does not flicker.
  • step S10 time-lapse photography to obtain multiple frames of images includes:
  • the dynamic range [-a ev, +b ev] of the images to be acquired is determined from the liveview histogram or by bracketing metering. For example, from the liveview histogram and the proportions of highlights and shadows, it is estimated that images over [-a ev, +b ev] need to be collected to recover the dynamic range of the current scene; alternatively, bracketing metering is used to collect an image sequence at specific exposure values, for example images over [-3 ev, +3 ev] at 1 ev intervals, from which an appropriate ev range is then selected.
  • Figure 6 is a schematic diagram of the dynamic range of images at different times in the present invention, in which the abscissa represents the shooting times (t0, t1, t2, t3, ...), the ordinate represents the dynamic range, each "I"-shaped mark represents the dynamic-range interval between the brightest and darkest parts of one frame, and the polyline "0ev" represents the middle value of the current dynamic range.
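  • A minimal sketch of how such a histogram-based estimate could look follows; the clipping thresholds and the mapping from clipped-pixel fraction to EV extension are assumptions for illustration, not values from the patent.

```python
import numpy as np

def estimate_dynamic_range(liveview, clip_lo=16, clip_hi=239, max_extend_ev=3.0):
    """Estimate the EV range [-a, +b] to capture from an 8-bit liveview frame.

    Pixels crushed into the shadows suggest raising exposure (+b ev) to recover them;
    pixels blown out in the highlights suggest lowering it (-a ev). Thresholds and
    scaling are illustrative assumptions.
    """
    hist, _ = np.histogram(liveview, bins=256, range=(0, 256))
    total = hist.sum()
    shadow_frac = hist[:clip_lo].sum() / total      # nearly-black pixels
    highlight_frac = hist[clip_hi:].sum() / total   # nearly-white pixels

    a = max_extend_ev * min(1.0, highlight_frac * 10)   # EVs below the metered exposure
    b = max_extend_ev * min(1.0, shadow_frac * 10)      # EVs above the metered exposure
    return -round(a, 1), round(b, 1)

# Example: a frame dominated by blown highlights asks for a large negative extension.
frame = np.clip(np.random.normal(200, 60, (480, 640)), 0, 255).astype(np.uint8)
print(estimate_dynamic_range(frame))   # e.g. (-3.0, 0.0)
```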
  • multiple frames of images are acquired with decreasing exposure, and the exposure-value difference between the multiple frames is the reduced exposure value between two adjacent frames.
  • collecting multiple frames of images within the dynamic range includes:
  • collecting multiple images at the i-th exposure value and performing noise reduction on them to obtain the i-th frame, where i ranges from 1 to N and N is the number of frames; when i = 1, the first exposure value is -a ev (exemplarily, n raw images are collected at -a ev and 3D noise reduction is applied to obtain a clean raw image R_0); as i increases, the i-th exposure value moves closer to +b ev; and the N-th exposure value is closest to +b ev but does not exceed +b ev.
  • by collecting multiple raw images at each exposure value and applying noise reduction to them to obtain one frame, the image quality of the resulting frame is improved, and the quality of the final time-lapse video is ultimately improved.
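  • A sketch of this acquisition loop is shown below: the exposure value is stepped from -a ev toward +b ev over N frames, a small burst is shot at each step, and averaging the burst stands in for the unspecified 3D noise reduction. The camera call `capture_raw(ev)` is a hypothetical function, not an API from the patent.

```python
import numpy as np

def capture_sequence(capture_raw, a, b, num_frames, burst=4):
    """Collect num_frames frames, stepping exposure from -a ev toward (but not past) +b ev.

    capture_raw(ev) is a hypothetical camera call returning one raw frame as an array;
    averaging the burst is a simple stand-in for 3D noise reduction.
    """
    evs = np.linspace(-a, b, num_frames, endpoint=False)   # last value stays below +b ev
    frames = []
    for ev in evs:
        shots = [capture_raw(ev).astype(np.float64) for _ in range(burst)]
        frames.append(np.mean(shots, axis=0))               # denoised frame R_i
    return evs, frames

# Usage with a fake sensor model (pure illustration):
rng = np.random.default_rng(0)
fake_sensor = lambda ev: rng.normal(512 * 2.0 ** ev, 20, size=(8, 8))
evs, frames = capture_sequence(fake_sensor, a=2.0, b=2.0, num_frames=5)
print([f"{ev:+.1f}ev" for ev in evs])
```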
  • step S20, generating smooth multiple bracketed exposure images for each frame of the multi-frame images, includes: generating, according to the exposure parameters of the multi-frame images and the exposure-value difference between the multi-frame images, the smooth multiple bracketed exposure images corresponding to each frame of image.
  • exemplarily, the exposure difference between the multiple frames may be the average exposure-value difference determined from the exposure-value differences between pairs of adjacent images.
  • generating the smooth multiple bracketed exposure images corresponding to each frame of image according to the exposure parameters of the multi-frame images and the exposure-value difference between the multi-frame images includes:
  • S21: the exposure parameters of the multiple frames of images are smoothed, where the exposure parameters include one or more of aperture, exposure time, and ISO. For example, the mean value between adjacent frames is taken as the smoothed exposure value, or smoothing filtering is performed with a filter window and filter coefficients related to the playback frame rate (a combined sketch of the smoothing, gain computation, and gain application follows after this passage).
  • S22 Determine multiple smoothed gain values according to the smoothed exposure parameter and the exposure value difference between the multiple frames of images.
  • exemplarily, the exposure-value difference between the multiple frames may be the reduced exposure value between two adjacent frames.
  • from the smoothed exposure parameters and the reduced exposure value used at capture time, the smoothed high, normal, and low ev values can be obtained, i.e., the gain values corresponding to the three raw images. By multiplying {R_i} by these gains, smooth bracketed raw image data {R_hi, R_ni, R_li} is obtained (depending on hardware constraints and environmental requirements, there need not be three bracketed images; two or more are possible).
  • obtaining the bracketed exposure images by applying gains derived from the smoothed exposure parameters makes the dynamic-range transition between video frames smoother, avoiding sudden loss of detail and the problem of the image flickering between bright and dark.
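  • Below is a combined sketch of these steps, assuming a moving-average smoother, a fixed bracket step of 1 EV, and a 12-bit raw white level; all three choices are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

def smooth_exposure(shot_evs, playback_fps=30, window_seconds=0.2):
    """Step S21 (sketch): moving-average smoothing of the per-frame exposure values.

    The window length is tied to the playback frame rate, following the idea that
    neighbouring frames of the final video should not differ sharply in exposure.
    """
    evs = np.asarray(shot_evs, dtype=np.float64)
    win = max(3, int(playback_fps * window_seconds) | 1)    # odd window length
    kernel = np.ones(win) / win
    padded = np.pad(evs, win // 2, mode="edge")              # avoid edge fall-off
    return np.convolve(padded, kernel, mode="valid")

def bracketed_from_raw(raw, shot_ev, smoothed_ev, bracket_step=1.0, white_level=4095):
    """Steps S22-S23 (sketch): derive smoothed gains and apply them to one raw frame.

    The 'normal' gain compensates the difference between the EV the frame was shot at
    and the smoothed target EV; the high/low images sit one bracket_step above/below it.
    """
    gains = {
        "high":   2.0 ** (smoothed_ev + bracket_step - shot_ev),
        "normal": 2.0 ** (smoothed_ev - shot_ev),
        "low":    2.0 ** (smoothed_ev - bracket_step - shot_ev),
    }
    return {name: np.clip(raw * g, 0, white_level) for name, g in gains.items()}

# Toy usage: three frames shot at decreasing exposure, smoothed, then bracketed.
shot = [-2.0, -1.0, 0.0]
smoothed = smooth_exposure(shot)
sets = [bracketed_from_raw(np.full((4, 4), 800.0), ev, s) for ev, s in zip(shot, smoothed)]
print(smoothed.round(2), sorted(sets[0].keys()))
```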
  • FIG. 8 is a schematic diagram of the extended dynamic range supported by the time-lapse photography method in the present invention.
  • the "I" shape of the dotted line represents the extended dynamic range that can be supported by the time-lapse photography method of the present invention.
  • FIG. 9 is a schematic diagram of the dynamic range of the smooth multiple exposure bracketing images generated by the time-lapse photography method in the present invention.
  • three polylines L1, L2, and L3 are included, corresponding to the high, normal, and low ev values, respectively.
  • the median 0ev of the dynamic range is extended to high ev, normal ev, and low ev, respectively.
  • exemplarily, the luminance at time t_0 is 1, the luminance at time t_1 is 2 (greater than luminance 1), and smoothing yields a luminance value between luminance 1 and luminance 2.
  • in the embodiment of the present invention, the exposure parameters of the multi-frame images are first smoothed; then, according to the smoothed exposure parameters and the exposure-value difference between the multi-frame images, multiple smoothed gain values are determined and used to generate the multiple bracketed exposure images corresponding to each frame.
  • this expands the dynamic range of the images, provides more room for exposure smoothing, increases the dynamic range of the synthesized time-lapse video, and avoids the loss of dynamic range that exposure smoothing would otherwise cause.
  • the method further includes: performing color coding processing on the smoothed multiple exposure bracketing images corresponding to each frame of images, and converting them into an image set corresponding to each frame of images in a video-compatible format.
  • exemplarily, an image signal processor (ISP) is used to convert the raw images {R_hi, R_ni, R_li} into YUV images {YUV_hi, YUV_ni, YUV_li}; the YUV color-coding format is compatible with common picture and video formats.
  • the inventors found that the global tone mapping (GTM) and local tone mapping (LTM) modules in the ISP may cause flicker in the processed image sets. To avoid flicker between the processed images, the intensity of local tone mapping and/or global tone mapping is limited when the color coding is performed; alternatively, the parameters used during the color-coding processing are recorded and smoothed afterwards.
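  • A minimal sketch of the "record the parameters and smooth them afterwards" idea follows; the per-frame scalar tone-mapping strength is a hypothetical parameter used only for illustration, and the exponential filter is one possible choice of post-smoothing.

```python
import numpy as np

def smooth_tone_params(per_frame_strength, alpha=0.15):
    """Temporally smooth a per-frame tone-mapping strength with an exponential filter.

    per_frame_strength: hypothetical scalar GTM/LTM strengths reported by the ISP,
    one per frame. Re-applying the smoothed values instead of the raw ones avoids
    frame-to-frame jumps that show up as flicker in the assembled video.
    """
    smoothed = []
    state = float(per_frame_strength[0])
    for s in per_frame_strength:
        state = (1 - alpha) * state + alpha * float(s)
        smoothed.append(state)
    return np.array(smoothed)

print(smooth_tone_params([1.0, 1.0, 1.8, 0.6, 1.0, 1.0]).round(3))
```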
  • depending on whether the camera position moves during shooting, time-lapse photography can be divided into timelapse (the camera position is fixed during shooting) and hyperlapse (the camera position changes between shots, giving viewers the effect of rapid movement through time and space). Hyperlapse is more prone to video flicker than timelapse.
  • optionally, to effectively address the video flicker in hyperlapse, before the multi-exposure fusion the time-lapse photography method of the present invention further includes: performing stabilization processing on the image set in a video-compatible format corresponding to each frame of image.
  • exemplarily, motion estimation is performed on the reference image of the image set in a video-compatible format corresponding to each frame (based on image features or IMU inertial measurement data), yielding a de-distortion factor, where the reference image is the image corresponding to 0ev and the images in the set correspond to different gain values; stabilization is then performed on the whole image set in a video-compatible format based on the de-distortion factor.
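  • A sketch of this step is shown below, assuming feature-based motion estimation (ORB features and a RANSAC homography) on the 0ev reference image, with the resulting warp applied to every image of that frame's set; these are one concrete choice, not the only de-distortion factor the text allows (IMU-derived transforms would serve the same role).

```python
import cv2
import numpy as np

def estimate_warp(ref_prev, ref_curr, max_features=500):
    """Estimate a homography aligning the current 0ev reference image to the previous one."""
    orb = cv2.ORB_create(max_features)
    k1, d1 = orb.detectAndCompute(ref_prev, None)
    k2, d2 = orb.detectAndCompute(ref_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)  # current
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)  # previous
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H

def stabilize_set(image_set, H):
    """Apply the same de-distortion warp to every image of one frame's bracketed set."""
    h, w = image_set[0].shape[:2]
    return [cv2.warpPerspective(img, H, (w, h)) for img in image_set]
```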
  • multi-exposure fusion is then performed on each stabilized set {YUV_hi, YUV_ni, YUV_li} to obtain the group of composite images {YUV_HDRi} with extended dynamic range;
  • {YUV_HDRi} can then be used to synthesize a time-lapse video with a large dynamic range.
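  • A short sketch of this last step is given below: each stabilized bracketed set is fused into a composite and the composites are written out at the higher playback frame rate. It reuses the hypothetical fuse_bracketed helper from the earlier sketch; the codec and frame rate are illustrative choices.

```python
import cv2

def write_timelapse(bracketed_sets, out_path="timelapse.mp4", playback_fps=30):
    """Fuse each stabilized bracketed set and write the composites as a time-lapse video.

    bracketed_sets: iterable of lists of 8-bit BGR images (e.g. [low, normal, high]),
    one list per captured frame, already stabilized. Playing the result at playback_fps,
    which is higher than the capture rate, is what produces the time-lapse effect.
    """
    writer = None
    for image_set in bracketed_sets:
        composite = fuse_bracketed(image_set)            # from the earlier fusion sketch
        if writer is None:
            h, w = composite.shape[:2]
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     playback_fps, (w, h))
        writer.write(composite)
    if writer is not None:
        writer.release()
```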
  • an embodiment of the present invention further provides a photographing device 100, which includes:
  • at least one processor, and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the time-lapse photography method described in any of the foregoing embodiments.
  • An embodiment of the present invention further provides a storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the time-lapse photography method described in any of the foregoing embodiments are implemented.
  • the embodiment of the present invention further provides a movable platform, including: a movable body 200, and the photographing device 100 according to any of the foregoing embodiments mounted on the movable body.
  • an embodiment of the present invention provides a time-lapse photography video generation method, which is used by a time-lapse photography video generation device (for example, a smartphone, a tablet computer, a notebook computer, etc., which is not limited in the present invention) and is mainly used to solve the flicker problem in time-lapse videos.
  • the material required for generating the time-lapse video can be obtained in cooperation with a camera device (e.g., a video camera).
  • exemplarily, the camera acquires the material required for generating the time-lapse video according to the configured requirements and transmits it to a time-lapse photography video generation device (for example, a smartphone); the smartphone then executes the time-lapse photography video generation method of the present invention to generate a time-lapse video from the received footage.
  • the time-lapse photography video generation device may be a terminal device installed with computer program software (for example, an APP for time-lapse photography video generation is installed on a smartphone).
  • the communication connection between the smartphone and the camera is realized by wired or wireless means; the photographer can configure the camera through the APP on the smartphone so that the camera collects, according to the preset requirements, the material needed to generate the time-lapse video, or the photographer can configure the camera directly so that it collects that material according to the preset requirements.
  • an embodiment of a time-lapse photography video generation method is provided, and the method is applied to a time-lapse photography video generation device.
  • the time-lapse photography video generation device can be a smartphone, a tablet computer, a notebook computer, etc. The invention does not limit this.
  • the time-lapse photography video generation method of the present invention includes:
  • in the embodiment of the present invention, multiple frames of images with different exposure values collected by the photographing device are first received; then multiple smooth bracketed exposure images corresponding to each frame are generated; and then the multiple bracketed exposure images of each frame are merged by multi-exposure fusion to obtain multiple composite images, which are used to synthesize the time-lapse video.
  • since the exposure values within the multiple bracketed exposure images are smooth, the exposure values of two temporally adjacent composite images obtained by multi-exposure fusion are also smooth, so the time-lapse video generated from the composite images does not flicker.
  • the step S120 of generating smooth multiple bracketed exposure images for each frame of the multi-frame images includes: generating, according to the exposure parameters of the multi-frame images and the exposure-value difference between the multi-frame images, the smooth multiple bracketed exposure images corresponding to each frame of image.
  • generating the smooth multiple exposure bracketing images corresponding to each frame image includes:
  • Image gain is performed on each frame of image by using the plurality of gain values to generate a plurality of smoothed bracketed exposure images corresponding to each frame of image.
  • the method further includes: performing color coding processing on the smoothed multiple exposure bracketing images corresponding to each frame of images, and converting them into an image set corresponding to each frame of images in a video-compatible format.
  • the intensity of local tone mapping and/or global tone mapping is limited when performing the color coding process.
  • the method further includes: performing stabilization processing on an image set corresponding to each frame of images in a video-compatible format.
  • performing stabilization processing on the image set corresponding to each frame of images in a video-compatible format includes:
  • a stabilization process is performed on the set of images in a video-compatible format corresponding to each frame of images based on the dewarping factor.
  • acquiring multiple frames of images by time-lapse photography includes: estimating the dynamic range of the current scene; and acquiring multiple frames of images by collecting within the dynamic range.
  • the time-lapse shooting to acquire multiple frames of images includes: acquiring multiple frames of images with reduced exposure, where the exposure value difference between the multiple frames of images is the reduced exposure value between two adjacent frames of images.
  • the estimating the dynamic range of the current scene includes: determining the dynamic range [-a ev, +b ev] of the image to be acquired according to the liveview histogram or using bracketing metering.
  • collecting multiple frames of images within the dynamic range includes:
  • collecting multiple images at the i-th exposure value and performing noise reduction to obtain the i-th frame, where i ranges from 1 to N and N is the number of frames; when i = 1, the first exposure value is -a ev; as i increases, the i-th exposure value moves closer to +b ev; and the N-th exposure value is closest to +b ev but does not exceed +b ev.
  • the multiple frames of images are multiple frames of raw images.
  • the embodiment of the present invention also provides a time-lapse photography video generation device, which is communicatively connected with the photography device, including:
  • at least one processor, and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the time-lapse photography video generation method described in any of the foregoing embodiments.
  • An embodiment of the present invention further provides a storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method for generating a time-lapse photography video described in any of the foregoing embodiments are implemented.
  • exemplarily, the present invention further provides a time-lapse photography system.
  • the time-lapse photography system includes the photographing device described in any of the foregoing embodiments and the time-lapse photography video generation device described in any of the foregoing embodiments that is communicatively connected to the photographing device.
  • FIG. 12 is a schematic diagram of an embodiment of the time-lapse photography system of the present invention, which includes a photography device 100 (which can be installed on a movable platform 200 ) and a smartphone 300 communicatively connected to the photography device.
  • the time-lapse photography system includes a photography device and a time-lapse photography video generation device that is communicatively connected to the photography device.
  • the photographing device may be a camera
  • the time-lapse photography video generating device may be a smart phone, a tablet computer, a notebook computer, etc., which is not limited in the present invention.
  • the camera can be mounted on a movable platform (for example, an unmanned aircraft, unmanned vehicle, or unmanned ship) or on a fixed bracket, which is not limited in the present invention; the photographer can use the smartphone to control the camera (for example, parameter setting, starting and stopping shooting, etc.) and to process images and videos.
  • further, when the movable platform is an unmanned aircraft or the like, flight control can also be performed through application software installed on the smartphone.
  • the time-lapse photography video generation method of the present invention is applied to a time-lapse photography system, and the method includes:
  • the photographing device acquires multiple frames of images by time-lapse shooting and sends them to the time-lapse photography video generation device, wherein the multiple frames of images respectively have different exposure values;
  • the time-lapse photography video generating device generates smooth multiple exposure bracketing images for each frame of the multi-frame images
  • the time-lapse photography video generation device obtains multiple composite images corresponding to the multiple frames of images through multiple exposure fusion based on the multiple exposure bracketing images generated corresponding to each frame of image;
  • the time-lapse photography video generating device synthesizes a time-lapse photography video according to the multiple composite images.
  • in the embodiment of the present invention, the photographing device first collects multiple frames of images with different exposure values; the time-lapse photography video generation device then generates multiple smooth bracketed exposure images corresponding to each frame and merges the multiple bracketed exposure images of each frame by multi-exposure fusion to obtain multiple composite images, which are used to synthesize the time-lapse video.
  • since the exposure values within the multiple bracketed exposure images are smooth, the exposure values of two temporally adjacent composite images obtained by multi-exposure fusion are also smooth, so the time-lapse video generated from the composite images does not flicker.
  • the step S220 of generating smooth multiple bracketed exposure images for each frame of the multi-frame images includes: generating, according to the exposure parameters of the multi-frame images and the exposure-value difference between the multi-frame images, the smooth multiple bracketed exposure images corresponding to each frame of image.
  • generating a plurality of smooth exposure bracketing images corresponding to each frame image includes:
  • Image gain is performed on each frame of image by using the plurality of gain values to generate a plurality of smoothed bracketed exposure images corresponding to each frame of image.
  • the method further includes: performing color coding processing on the smoothed multiple exposure bracketing images corresponding to each frame of images, and converting them into an image set corresponding to each frame of images in a video-compatible format.
  • the intensity of local tone mapping and/or global tone mapping is limited when performing the color coding process.
  • the method further includes: performing stabilization processing on an image set corresponding to each frame of images in a video-compatible format.
  • performing stabilization processing on an image set in a video-compatible format corresponding to each frame of image includes:
  • a stabilization process is performed on the set of images in a video-compatible format corresponding to each frame of images based on the dewarping factor.
  • acquiring multiple frames of images through time-lapse photography includes: a photographing device estimating a dynamic range of the current scene; and the photographing device collects multiple frames of images within the dynamic range.
  • acquiring multiple frames of images by time-lapse photography includes: a photographing device acquires multiple frames of images with reduced exposure, and the exposure value difference between the multiple frames of images is the reduced exposure value between two adjacent frames of images.
  • estimating the dynamic range of the current scene includes: the photographing device determines the dynamic range [-a ev, +b ev] of the image to be captured according to the liveview histogram or using bracketing metering.
  • the acquisition of multiple frames of images in the dynamic range includes:
  • collecting multiple images at the i-th exposure value and performing noise reduction to obtain the i-th frame, where i ranges from 1 to N and N is the number of frames; when i = 1, the first exposure value is -a ev; as i increases, the i-th exposure value moves closer to +b ev; and the N-th exposure value is closest to +b ev but does not exceed +b ev.
  • the multiple frames of images are multiple frames of raw images.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general hardware platform, and certainly can also be implemented by hardware.
  • in essence, or in the parts that contribute beyond the related art, the above technical solutions can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments or in some parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses a time-lapse photography method and device, and a time-lapse photography video generation method and device. The time-lapse photography method is applied to a photographing device and includes: acquiring multiple frames of images by time-lapse shooting; generating smooth multiple bracketed exposure images for each frame of the multi-frame images; obtaining, based on the generated multiple bracketed exposure images corresponding to each frame, multiple composite images corresponding to the multi-frame images by multi-exposure fusion; and synthesizing a time-lapse video from the multiple composite images. In the present invention, since the exposure value of each image within the multiple bracketed exposure images is smooth, the exposure values of two temporally adjacent composite images obtained by multi-exposure fusion are also smooth, so that the time-lapse video generated from the multiple composite images does not flicker.

Description

延时摄影方法及设备、延时摄影视频生成方法及设备 技术领域
本发明涉及摄影技术领域,尤其涉及一种延时摄影方法及设备、延时摄影视频生成方法及设备。
背景技术
现代数码拍摄设备为人们提供了丰富的图像和视频拍摄手段,延时摄影是一种使用比拍摄帧率更高的播放帧率进行回放的拍摄和视频处理手段。
由于图像是长时间采集的,环境光等条件可能会发生改变,为了适应随着时间的变化,环境光的变化,许多相机会被设置为自动曝光模式,即实时根据场景调整合适的曝光参数,以将更多的图像细节容纳进当前拍摄的图像中。假如以正常帧率(30fps)拍摄时,拍摄图像之间的时间间隔很短,环境条件通常变化不大,曝光参数过渡平滑。但如果降低拍摄帧率,即增加拍摄图像之间的时间间隔,相邻图像的拍摄时的环境条件可能会发生很大的变化,相机的自动曝光模式可能会对拍摄时的曝光参数做出较大的改动。当以高帧率回放具有明显不同曝光参数的连续图像时,将会出现视频的闪烁。
发明内容
本发明实施例提供一种延时摄影方法及设备、延时摄影视频生成方法及设备,用于至少解决上述技术问题之一。
第一方面,本发明实施例提供一种延时摄影方法,应用于摄影设备,所述方法包括:
延时拍摄获取多帧图像,其中,所述多帧图像分别具有不同的曝光值;
为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像;
基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝光融合 得到对应于所述多帧图像的多张合成图像;
根据所述多张合成图像合成延时摄影视频。
第二方面,本发明实施例提供一种延时摄影视频生成方法,应用于与摄影设备通信连接的延时摄影视频生成设备,所述方法包括:
自所述摄影设备获取延时拍摄的多帧图像,其中,所述多帧图像分别具有不同的曝光值;
为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像;
基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝光融合,得到对应于所述多帧图像的多张合成图像;
根据所述多张合成图像合成延时摄影视频。
第三方面,本发明实施例还提供一种摄影设备,其包括:
至少一个处理器,
以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行前述任一实施例所述的延时摄影方法。
第四方面,本发明实施例还提供一种存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现前述任一实施例所述的延时摄影方法的步骤。
第五方面,本发明实施例还提供一种可移动平台,包括:
可移动本体,和
安装在所述可移动本体上的根据前述任一实施例所述的摄影设备。
第六方面,本发明实施例还提供一种延时摄影视频生成设备,其与摄影设备通信连接,包括:
至少一个处理器,
以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行前述任一实施例所述的延时摄影视频生成方法。
第七方面,本发明实施例还提供一种存储介质,其上存储有计算机程 序,其特征在于,该程序被处理器执行时实现前述任一实施例所述的延时摄影视频生成方法的步骤。
第八方面,本发明实施例还提供一种延时摄影系统,包括:前述任一实施例所述的摄影设备和与该摄影设备通信连接的前述任一实施例所述的延时摄影视频生成设备。
本发明实施例的有益效果在于:首先采集了多帧分别具有不同曝光值的图像,然后再生成对应于每帧图像的平滑的多张包围曝光图像,然后再通过多曝光融合的方式将每帧图像的多张包围曝光图像进行合成得到多张合成图像,用于合成延时摄影视频。由于多张包围曝光图像中的每张图像的曝光值是平滑的,所以使得多曝光融合得到的时间上相邻的两张合成图像的曝光值也达到平滑的效果,从而使得基于多张合成图像所生成的延时摄影视频不会出现闪烁。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明的延时摄影方法的一实施例的流程图;
图2为本发明中延时拍摄获取N帧图像的示意图;
图3为本发明中为每帧图像生成平滑的多张曝光图像的示意图;
图4为本发明中基于每帧图像的多张包围曝光图像融合得到多张合成图像的示意图;
图5为本发明的延时摄影方法的另一实施例的流程图;
图6为本发明中的不同时刻的图像的动态范围示意图;
图7为本发明的延时摄影方法的另一实施例的流程图;
图8为本发明中的延时摄影方法所支持扩展的动态范围示意图;
图9为本发明中的延时摄影方法生成的平滑的多张包围曝光图像的动态范围示意图;
图10a为本发明的一种摄影设备的实施例的示意图;
图10b为本发明的一种可移动平台的实施例的示意图;
图11为本发明的延时摄影视频生成方法的一实施例的流程图;
图12为本发明的延时摄影系统的一实施例的示意图;
图13为本发明的延时摄影方法的另一实施例的流程图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,在不冲突的情况下,本发明中的实施例及实施例中的特征可以相互组合。
本发明可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、元件、数据结构等等。也可以在分布式计算环境中实践本发明,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
在本发明中,“模块”、“装置”、“系统”等指应用于计算机的相关实体,如硬件、硬件和软件的组合、软件或执行中的软件等。详细地说,例如,元件可以、但不限于是运行于处理器的过程、处理器、对象、可执行元件、执行线程、程序和/或计算机。还有,运行于服务器上的应用程序或脚本程序、服务器都可以是元件。一个或多个元件可在执行的过程和/或线程中,并且元件可以在一台计算机上本地化和/或分布在两台或多台计算机之间,并可以由各种计算机可读介质运行。元件还可以根据具有一个或多个数据包的信号,例如,来自一个与本地系统、分布式系统中另一元件交互的,和/或在因特网的网络通过信号与其它系统交互的数据的信号通过本地和/或远程过程来进行通信。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术 语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”,不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本发明实施例提供了一种延时摄影方法,主要用于解决延时摄影视频中存在的闪烁问题。示例性地,整个延时摄影方法的过程可以是在摄影设备端完成的,例如,摄影人员通过在摄像机端进行设置,使得摄像机执行本发明的延时摄影方法得到延时摄影视频,整个摄影过程、图像处理过程以及延时摄影视频的合成过程均在摄像机端完成。
如图1所示提供了一种延时摄影方法的实施例,该方法应用于摄影设备,该摄影设备可以是摄像机、可穿戴电子设备、手机、平板电脑、笔记本电脑等,本发明对此不作限制。示例性地,摄影设备可以安装在可移动平台上使用,该可移动平台可以是无人机(无人飞机或者无人车或者无人船等)、手持云台等,本发明对此不作限定。
如图1所示,在本发明的延时摄影方法的一实施例中包括以下方法步骤:
S10、延时拍摄获取多帧图像,其中,所述多帧图像分别具有不同的曝光值。
示例性地,通过预先设置或者预设程序指令控制延时摄影设备在不同时刻按照不同的曝光值采集图像从而获取多帧图像{R i},其中,i取值1至N,N是多帧图像的数量。如图2所示为延时拍摄获取N帧图像的示意图。
S20、为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像。
示例性地,在步骤S10中获取多帧图像之后,为其中的每一帧图像生成多张平滑的包围曝光图像。例如,对于图像帧R i,为其确定多张平滑的包围曝光图像,这多张平滑的包围曝光图像中同时包含了曝光值大于和小于图像帧R i的曝光值的图像,其中,平滑指的是曝光值大小接近的两张图 像之间的曝光值的差值小于预设阈值(例如,1/3EV、0.5EV或者1EV)。
如图3所示为为每帧图像生成平滑的多张曝光图像的示意图。其中,{R hi,R ni,R li}是对应于图像帧R i的包围曝光图像。
S30、基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝光融合得到对应于所述多帧图像的多张合成图像。
如图4所示为基于每帧图像的多张包围曝光图像融合得到多张合成图像的示意图。其中,YUV HDRi是对应于包围曝光图像{R hi,R ni,R li}的合成图像。
示例性地,在步骤S20中为每帧图像生成了平滑的多张包围曝光图像之后采用多曝光融合将对应于每帧图像的多张包围曝光图像融合为合成图像,得到对应于多帧图像的多张合成图像。
S40、根据所述多张合成图像合成延时摄影视频。
本发明实施例首先采集了多帧分别具有不同曝光值的图像,然后再生成对应于每帧图像的平滑的多张包围曝光图像,然后再通过多曝光融合的方式将每帧图像的多张包围曝光图像进行合成得到多张合成图像,用于合成延时摄影视频。由于多张包围曝光图像中的每张图像的曝光值是平滑的,所以使得多曝光融合得到的时间上相邻的两张合成图像的曝光值也达到的平滑的效果,从而使得基于多张合成图像所生成的延时摄影视频不会出现闪烁。
可选的,延时拍摄获取多帧图像为多帧raw图像。如图5所示在本发明的延时摄影方法的一实施例中,步骤S10延时拍摄获取多帧图像包括:
S11、估计当前场景的动态范围。
示例性地,根据liveview直方图或者使用包围测光确定待采集图像的动态范围[-a ev,+b ev]。例如,根据liveview的直方图,高光和暗部占比,估计需要恢复当前场景的动态范围需要采集[-a ev,+b ev]图像;或者使用包围测光,以特定的曝光值采集图像序列,例如,以1ev为间隔采集[-3ev,+3ev]的图像,再选择合适的ev范围。
如图6所示为本发明中的不同时刻的图像的动态范围示意图,其中横坐标表示拍摄之间(t0、t1、t2、t3……),纵坐标表示动态范围,“工”字形图案表示的是一帧图像最亮和最暗表示的动态范围区间,折线“0ev” 表示当前动态范围的中值。
S12、在所述动态范围内采集得到多帧图像。
示例性地,以降曝光采集得到多帧图像,所述多帧图像之间的曝光值差异为相邻两帧图像之间的曝光值差值(降曝光值)。
示例性地,在所述动态范围内采集得到多帧图像包括:
在第i曝光值下采集多张图像并进行降噪处理得到第i帧图像,i取值1至N,N为所述多帧图像的数量;其中,当i=1时,第1曝光值为-a ev(示例性地,采集n张-a ev的raw图像,进行3D降噪得到纯净的raw图像R 0),随着i增大,第i曝光值越接近+b ev,第N曝光值最接近+b ev但未超过+b ev。
本发明实施例中通过在每一个曝光值下采集多张raw图像,并对这多张raw图像进行降噪处理得到一帧图像,提升了所得到的图像帧的画质,最终提升了延时摄影视频的质量。
可选的,在上述实施例中步骤S20为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像包括:根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像。示例性地,多帧图像之间的曝光差异可以是根据两两相邻的图像之间的曝光值之差所确定的平均曝光值之差。
如图7所示在本发明的延时摄影方法的一实施例中,根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像包括:
S21、对多帧图像的曝光参数进行平滑处理。
示例性地,曝光参数包括光圈、曝光时间和ISO中的一种或者多种。以相邻帧图像之间的均值作为平滑后的曝光值;或者使用与播放帧率有关的滤波窗口和滤波系统进行平滑滤波。
S22、根据平滑处理后的曝光参数和多帧图像之间的曝光值差异,确定平滑后的多个增益值。
示例性地,多帧图像之间的曝光值差异可以是相邻两帧图像之间的降曝光值。根据平滑后的曝光参数和拍摄时的降曝光值,可以得出平滑后的 high,normal,low ev值,即得到这三张raw图像对应的gain值,通过对{R i}乘gain,即可得到平滑的包围曝光raw图像数据{{R hi,R ni,R li}}(根据硬件限制和环境需要,不一定是三张包围曝光图像,可以是2张或以上)。
通过使用平滑后的曝光参数通过乘gain的方式得到包围曝光图像,使得视频帧中的动态范围过渡更平滑,避免了细节的突然丢失以及忽明忽暗的问题。
S23、利用所述多个增益值对每帧图像进行图像增益,生成对应于每帧图像的平滑的多张包围曝光图像。
如图8所示为本发明中的延时摄影方法所支持扩展的动态范围示意图。其中,虚线的“工”字形表示本发明的延时摄影方法所能支持扩展的动态范围。
如图9所示为本发明中的延时摄影方法生成的平滑的多张包围曝光图像的动态范围示意图。其中,包括三条折线L1、L2和L3,分别对应于high(高),normal(中),low(低)ev值,将动态范围的中值0ev分别扩展到了high ev,normal ev,low ev。
示例性地,t 0时刻亮度为1,t 1时刻亮度为2(大于亮度1),平滑后得到介于亮度1和亮度2之间的亮度值。
本发明实施例首先对多帧图像的曝光参数进行平滑处理,然后根据平滑处理后的曝光参数和多帧图像之间的曝光值差异,确定平滑后的多个增益值以用于生成对应于每帧图像的多张包围曝光图像,扩大了图像动态范围,为曝光平滑带来更大的平滑空间,能够增大合成的延时摄影视频的动态范围,避免曝光平滑带来的动态范围损失。
可选的,在多曝光融合之前还包括:对对应于每帧图像的平滑的多张包围曝光图像进行颜色编码处理,转换成对应于每帧图像的视频兼容格式的图像集。示例性地,利用图像处理芯片ISP将raw图像{{R hi,R ni,R li}}转YUV{{YUV hi,YUV ni,YUV li}},YUV颜色编码格式,兼容通用图片和视频格式。
可选的,发明人发现ISP中的gtm(Global Tone Mapping,全局色调映射),ltm(Local Tone Mapping,局部色调映射)模块可能会导致处理 后的图像集出现闪烁。为了避免处理后的图像之间存在闪烁的问题,在进行所述颜色编码处理时,限制局部色调映射和/或全局色调映射的强度;或者在进行所述颜色编码处理时,获取处理时的参数,并进行后期平滑处理。
在实现本发明的过程中发明人发现,由于延时摄影类型的不同,也会导致不同程度的闪烁问题。例如,根据拍摄时相机位置是否发生移动,可以将延时摄影分为timelapse(拍摄过程中相机位置固定)和hyperlapse(通过改变每次拍摄时的位置,给观众带来时间和空间上的快速运动的效果)。Hyperlapse比timelapse更容易出现视频的闪烁。
可选的,为有效解决以上hyperlapse中的视频闪烁问题,在多曝光融合之前,本发明的延时摄影方法还包括:对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
示例性地,对对应于每帧图像的视频兼容格式的图像集中的基准图像进行运动估计,得到去畸变因子,其中,所述基准图像是指所对应0ev的图像,所述图像集中的各张图像对应于不同的增益值;基于所述去畸变因子对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
例如,对{YUV ni}进行运动估计(基于图像特征或IMU惯性测量单元信息),得到用于去畸变的几何变换表{meshi}(即,畸变因子);
将{meshi}应用于{{YUV hi,YUV ni,YUV li}}上进行图像增稳。
将增稳后的{{YUV hi,YUV ni,YUV li}}分别进行多曝光融合,得到扩展动态范围之后的合成图像组{YUV HDRi};
{YUV HDRi}即可用于合成大动态范围的延时摄影视频。
如图10a所示本发明实施例还提供一种摄影设备100,其包括:
至少一个处理器,
以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行前述任一实施例所述的延时摄影方法。
本发明实施例还提供一种存储介质,其上存储有计算机程序,其特征 在于,该程序被处理器执行时实现前述任一实施例所述的延时摄影方法的步骤。
如图10b所示本发明实施例还提供一种可移动平台,包括:可移动本体200,和安装在所述可移动本体上的根据前述任一实施例所述的摄影设备100。
本发明实施例提供了一种延时摄影视频生成方法,用于延时摄影视频生成设备(例如,智能手机、平板电脑、笔记本电脑等,本发明对此不作限制),主要用于解决延时摄影视频中存在的闪烁问题。可以通过摄像设备(例如,摄像机)的配合获取生成延时摄影视频所需的素材。示例性地,摄像机按照设置要求获取生成延时摄影视频所需的素材,然后将其传输给延时摄影视频生成设备(例如,智能手机),智能手机通过执行本发明的延时摄影视频生成方法来基于所接收到的素材生成延时摄影视频。示例性地,以上摄像机和智能手机相互配合执行的方法构成了另外一种延时摄影方法。
示例性地,延时摄影视频生成设备可以是安装了计算机程序软件的终端设备(例如,智能手机端安装了用于进行延时摄影视频生成的APP)。示例性地,智能手机与摄像机之间通过有线或者无线的方式实现通信连接,摄影人员可以是在智能手机端的APP上对摄像机进行设置以使得摄像机按照预设要求采集生成延时摄影视频所需的素材;或者摄影人员可以直接在摄像机上进行设置以使得摄像机按照预设要求采集生成延时摄影视频所需的素材。
如图11所示提供了一种延时摄影视频生成方法的实施例,该方法应用于延时摄影视频生成设备,该延时摄影视频生成设备可以是智能手机、平板电脑、笔记本电脑等,本发明对此不作限制。
如图11所示本发明的延时摄影视频生成方法,方法包括:
S110、自所述摄影设备获取延时拍摄的多帧图像,其中,所述多帧图像分别具有不同的曝光值;
S120、为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像;
S130、基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝 光融合,得到对应于所述多帧图像的多张合成图像;
S140、根据所述多张合成图像合成延时摄影视频。
本发明实施例首先接收了摄影设备采集的多帧分别具有不同曝光值的图像,然后再生成对应于每帧图像的平滑的多张包围曝光图像,然后再通过多曝光融合的方式将每帧图像的多张包围曝光图像进行合成得到多张合成图像,用于合成延时摄影视频。由于多张包围曝光图像中的每张图像的曝光值是平滑的,所以使得多曝光融合得到的时间上相邻的两张合成图像的曝光值也达到的平滑的效果,从而使得基于多张合成图像所生成的延时摄影视频不会出现闪烁。
可选的,步骤S120为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像包括:根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像。
示例性地,根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像包括:
对多帧图像的曝光参数进行平滑处理;
根据平滑处理后的曝光参数和所述多帧图像之间的曝光值差异,确定平滑后的多个增益值;
利用所述多个增益值对每帧图像进行图像增益,生成对应于每帧图像的平滑的多张包围曝光图像。
可选的,在多曝光融合之前还包括:对对应于每帧图像的平滑的多张包围曝光图像进行颜色编码处理,转换成对应于每帧图像的视频兼容格式的图像集。
可选的,在进行所述颜色编码处理时,限制局部色调映射和/或全局色调映射的强度。
可选的,在进行所述颜色编码处理时,获取处理时的参数,并进行后期平滑处理。
可选的,在多曝光融合之前,还包括:对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
可选的,对对应于每帧图像的视频兼容格式的图像集进行增稳处理包 括:
对对应于每帧图像的视频兼容格式的图像集中的基准图像进行运动估计,得到去畸变因子,其中,所述基准图像是指所对应0ev的图像,所述图像集中的各张图像对应于不同的增益值;
基于所述去畸变因子对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
可选的,延时拍摄获取多帧图像包括:估计当前场景的动态范围;在所述动态范围内采集得到多帧图像。
可选的,所述延时拍摄获取多帧图像包括:以降曝光采集得到多帧图像,所述多帧图像之间的曝光值差异为相邻两帧图像之间的降曝光值。
可选的,所述估计当前场景的动态范围包括:根据liveview直方图或者使用包围测光确定待采集图像的动态范围[-a ev,+b ev]。
可选的,在所述动态范围内采集得到多帧图像包括:
在第i曝光值下采集多张图像并进行降噪处理得到第i帧图像,i取值1至N,N为所述多帧图像的数量;
其中,当i=1时,第1曝光值为-a ev,随着i增大,第i曝光值越接近+b ev,第N曝光值最接近+b ev但未超过+b ev。
可选的,所述多帧图像为多帧raw图像。
本发明实施例还提供一种延时摄影视频生成设备,其与摄影设备通信连接,包括:
至少一个处理器,
以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行前述任一实施例所述的延时摄影视频生成方法。
本发明实施例还提供一种存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现前述任一实施例所述的延时摄影视频生成方法的步骤。
示例性地,本发明还提供一种延时摄影系统,该延时摄影系统包括前述任一实施例所述的摄影设备和与该摄影设备通信连接的前述任一实施例所述的延时摄影视频生成设备。
如图12所示为本发明的延时摄影系统的一实施例的示意图,其中,包括摄影设备100(其可以安装在可移动平台200上),与摄影设备通信连接的智能手机300。
如图13所示提供了一种延时摄影视频生成方法的实施例,该方法应用于延时摄影系统,该延时摄影系统包括摄影设备和与该摄影设备通信连接的延时摄影视频生成设备。其中,摄影设备可以是摄像机,延时摄影视频生成设备可以是智能手机、平板电脑、笔记本电脑等,本发明对此不作限制。以摄像机和智能手机为例,摄像机可以安装在可移动平台上(例如,无人飞机、无人车、无人船等)或者安装在固定支架上,本发明对此不作限定,摄影者可以通过智能手机对摄像机进行操控(例如,参数设置、摄影的开启与停止等等)以及图像视频的处理等。进一步地,当可移动平台是无人飞机等时,还可以通过智能手机端安装的应用软件对其进行飞行控制。
如图13所示本发明的延时摄影视频生成方法,该方法应用于延时摄影系统,该方法包括:
S210、摄影设备获取延时拍摄的多帧图像,并发送至延时摄影视频生成设备,其中,所述多帧图像分别具有不同的曝光值;
S220、延时摄影视频生成设备为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像;
S230、延时摄影视频生成设备基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝光融合,得到对应于所述多帧图像的多张合成图像;
S240、延时摄影视频生成设备根据所述多张合成图像合成延时摄影视频。
本发明实施例首先通过摄影设备采集了多帧分别具有不同曝光值的图像,然后延时摄影视频生成设备再生成对应于每帧图像的平滑的多张包围曝光图像,然后再通过多曝光融合的方式将每帧图像的多张包围曝光图像进行合成得到多张合成图像,用于合成延时摄影视频。由于多张包围曝 光图像中的每张图像的曝光值是平滑的,所以使得多曝光融合得到的时间上相邻的两张合成图像的曝光值也达到的平滑的效果,从而使得基于多张合成图像所生成的延时摄影视频不会出现闪烁。
可选的,步骤S220为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像包括:根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像。
示例性地,根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像包括:
对多帧图像的曝光参数进行平滑处理;
根据平滑处理后的曝光参数和所述多帧图像之间的曝光值差异,确定平滑后的多个增益值;
利用所述多个增益值对每帧图像进行图像增益,生成对应于每帧图像的平滑的多张包围曝光图像。
可选的,在多曝光融合之前还包括:对对应于每帧图像的平滑的多张包围曝光图像进行颜色编码处理,转换成对应于每帧图像的视频兼容格式的图像集。
可选的,在进行所述颜色编码处理时,限制局部色调映射和/或全局色调映射的强度。
可选的,在进行所述颜色编码处理时,获取处理时的参数,并进行后期平滑处理。
可选的,在多曝光融合之前,还包括:对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
可选的,对对应于每帧图像的视频兼容格式的图像集进行增稳处理包括:
对对应于每帧图像的视频兼容格式的图像集中的基准图像进行运动估计,得到去畸变因子,其中,所述基准图像是指所对应0ev的图像,所述图像集中的各张图像对应于不同的增益值;
基于所述去畸变因子对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
可选的,延时拍摄获取多帧图像包括:摄影设备估计当前场景的动态范围;摄影设备在所述动态范围内采集得到多帧图像。
可选的,延时拍摄获取多帧图像包括:摄影设备以降曝光采集得到多帧图像,多帧图像之间的曝光值差异为相邻两帧图像之间的降曝光值。
可选的,估计当前场景的动态范围包括:摄影设备根据liveview直方图或者使用包围测光确定待采集图像的动态范围[-a ev,+b ev]。
可选的,在动态范围内采集得到多帧图像包括:
在第i曝光值下采集多张图像并进行降噪处理得到第i帧图像,i取值1至N,N为所述多帧图像的数量;
其中,当i=1时,第1曝光值为-a ev,随着i增大,第i曝光值越接近+b ev,第N曝光值最接近+b ev但未超过+b ev。
可选的,所述多帧图像为多帧raw图像。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作合并,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到各实施方式可借助软件加通用硬件平台的方式来实现,当然也可以通过硬件。基于这样的理解,上述技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使 得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行各个实施例或者实施例的某些部分所述的方法。
最后应说明的是:以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (32)

  1. 一种延时摄影方法,应用于摄影设备,所述方法包括:
    延时拍摄获取多帧图像,其中,所述多帧图像分别具有不同的曝光值;
    为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像;
    基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝光融合得到对应于所述多帧图像的多张合成图像;
    根据所述多张合成图像合成延时摄影视频。
  2. 根据权利要求1所述的方法,其中,为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像包括:
    根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像。
  3. 根据权利要求2所述的方法,其中,根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像包括:
    对多帧图像的曝光参数进行平滑处理;
    根据平滑处理后的曝光参数和所述多帧图像之间的曝光值差异,确定平滑后的多个增益值;
    利用所述多个增益值对每帧图像进行图像增益,生成对应于每帧图像的平滑的多张包围曝光图像。
  4. 根据权利要求1所述的方法,其中,在多曝光融合之前还包括:
    对对应于每帧图像的平滑的多张包围曝光图像进行颜色编码处理,转换成对应于每帧图像的视频兼容格式的图像集。
  5. 根据权利要求4所述的方法,其中,在进行所述颜色编码处理时,限制局部色调映射和/或全局色调映射的强度。
  6. 根据权利要求4所述的方法,其中,在进行所述颜色编码处理时, 获取处理时的参数,并进行后期平滑处理。
  7. 根据权利要求4所述的方法,其中,在多曝光融合之前,还包括:对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
  8. 根据权利要求7所述的方法,其中,对对应于每帧图像的视频兼容格式的图像集进行增稳处理包括:
    对对应于每帧图像的视频兼容格式的图像集中的基准图像进行运动估计,得到去畸变因子,其中,所述基准图像是指所对应0ev的图像;
    基于所述去畸变因子对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
  9. 根据权利要求1所述的方法,其中,延时拍摄获取多帧图像包括:
    估计当前场景的动态范围;
    在所述动态范围内采集得到多帧图像。
  10. 根据权利要求9所述的方法,其中,在所述动态范围内采集得到多帧图像包括:以降曝光采集得到多帧图像,所述多帧图像之间的曝光值差异为相邻两帧图像之间的降曝光值。
  11. 根据权利要求10所述的方法,其中,所述估计当前场景的动态范围包括:
    根据liveview直方图或者使用包围测光确定待采集图像的动态范围[-a ev,+b ev]。
  12. 根据权利要求11所述的方法,其中,在所述动态范围内采集得到多帧图像包括:
    在第i曝光值下采集多张图像并进行降噪处理得到第i帧图像,i取值1至N,N为所述多帧图像的数量;
    其中,当i=1时,第1曝光值为-a ev,随着i增大,第i曝光值越接 近+b ev,第N曝光值最接近+b ev但未超过+b ev。
  13. 根据权利要求1-12中任一项所述的方法,其中,所述多帧图像为多帧raw图像。
  14. 一种延时摄影视频生成方法,应用于与摄影设备通信连接的延时摄影视频生成设备,所述方法包括:
    自所述摄影设备获取延时拍摄的多帧图像,其中,所述多帧图像分别具有不同的曝光值;
    为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像;
    基于所生成的对应于每帧图像的多张包围曝光图像,通过多曝光融合,得到对应于所述多帧图像的多张合成图像;
    根据所述多张合成图像合成延时摄影视频。
  15. 根据权利要求14所述的方法,其中,为所述多帧图像中的每帧图像生成平滑的多张包围曝光图像包括:
    根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像。
  16. 根据权利要求15所述的方法,其中,根据所述多帧图像的曝光参数和所述多帧图像之间的曝光值差异,生成对应于每帧图像的平滑的多张包围曝光图像包括:
    对多帧图像的曝光参数进行平滑处理;
    根据平滑处理后的曝光参数和所述多帧图像之间的曝光值差异,确定平滑后的多个增益值;
    利用所述多个增益值对每帧图像进行图像增益,生成对应于每帧图像的平滑的多张包围曝光图像。
  17. 根据权利要求14所述的方法,其中,在多曝光融合之前还包括:
    对对应于每帧图像的平滑的多张包围曝光图像进行颜色编码处理,转 换成对应于每帧图像的视频兼容格式的图像集。
  18. 根据权利要求17所述的方法,其中,在进行所述颜色编码处理时,限制局部色调映射和/或全局色调映射的强度。
  19. 根据权利要求17所述的方法,其中,在进行所述颜色编码处理时,获取处理时的参数,并进行后期平滑处理。
  20. 根据权利要求17所述的方法,其中,在多曝光融合之前,还包括:对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
  21. 根据权利要求20所述的方法,其中,对对应于每帧图像的视频兼容格式的图像集进行增稳处理包括:
    对对应于每帧图像的视频兼容格式的图像集中的基准图像进行运动估计,得到去畸变因子,其中,所述基准图像是指所对应0ev的图像;
    基于所述去畸变因子对对应于每帧图像的视频兼容格式的图像集进行增稳处理。
  22. 根据权利要求14所述的方法,其中,延时拍摄获取多帧图像包括:
    估计当前场景的动态范围;
    在所述动态范围内采集得到多帧图像。
  23. 根据权利要求22所述的方法,其中,所述延时拍摄获取多帧图像包括:以降曝光采集得到多帧图像,所述多帧图像之间的曝光值差异为相邻两帧图像之间的降曝光值。
  24. 根据权利要求23所述的方法,其中,所述估计当前场景的动态范围包括:
    根据liveview直方图或者使用包围测光确定待采集图像的动态范围[-a  ev,+b ev]。
  25. 根据权利要求24所述的方法,其中,在所述动态范围内采集得到多帧图像包括:
    在第i曝光值下采集多张图像并进行降噪处理得到第i帧图像,i取值1至N,N为所述多帧图像的数量;
    其中,当i=1时,第1曝光值为-a ev,随着i增大,第i曝光值越接近+b ev,第N曝光值最接近+b ev但未超过+b ev。
  26. 根据权利要求14-25中任一项所述的方法,其中,所述多帧图像为多帧raw图像。
  27. 一种摄影设备,其包括:
    至少一个处理器,
    以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-13中任一项所述的方法。
  28. 一种存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现权利要求1-13中任意一项所述方法的步骤。
  29. 一种可移动平台,包括:
    可移动本体,和
    安装在所述可移动本体上的根据权利要求27所述的摄影设备。
  30. 一种延时摄影视频生成设备,其与摄影设备通信连接,包括:
    至少一个处理器,
    以及与所述至少一个处理器通信连接的存储器,其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理 器执行,以使所述至少一个处理器能够执行权利要求14-26中任一项所述的方法。
  31. 一种存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现权利要求14-26中任一项所述方法的步骤。
  32. 一种延时摄影系统,包括:
    根据权利要求27所述的摄影设备;
    与所述摄影设备通信连接的根据权利要求30所述的延时摄影视频生成设备。
PCT/CN2020/131657 2020-11-26 2020-11-26 延时摄影方法及设备、延时摄影视频生成方法及设备 WO2022109897A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080067831.4A CN114514738A (zh) 2020-11-26 2020-11-26 延时摄影方法及设备、延时摄影视频生成方法及设备
PCT/CN2020/131657 WO2022109897A1 (zh) 2020-11-26 2020-11-26 延时摄影方法及设备、延时摄影视频生成方法及设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/131657 WO2022109897A1 (zh) 2020-11-26 2020-11-26 延时摄影方法及设备、延时摄影视频生成方法及设备

Publications (1)

Publication Number Publication Date
WO2022109897A1 true WO2022109897A1 (zh) 2022-06-02

Family

ID=81546259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/131657 WO2022109897A1 (zh) 2020-11-26 2020-11-26 延时摄影方法及设备、延时摄影视频生成方法及设备

Country Status (2)

Country Link
CN (1) CN114514738A (zh)
WO (1) WO2022109897A1 (zh)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090153699A1 (en) * 2007-12-18 2009-06-18 Sony Corporation Imaging apparatus, imaging processing method, and imaging control program
CN105323527A (zh) * 2014-05-30 2016-02-10 苹果公司 用于时移视频的曝光计量的系统和方法
CN105472236A (zh) * 2014-09-30 2016-04-06 苹果公司 具有最佳图像稳定的延时视频采集
JP2017041736A (ja) * 2015-08-19 2017-02-23 オリンパス株式会社 撮像装置、撮像方法
CN105657243A (zh) * 2015-11-08 2016-06-08 乐视移动智能信息技术(北京)有限公司 防抖动的延时摄影方法和装置
CN108432222A (zh) * 2015-12-22 2018-08-21 深圳市大疆创新科技有限公司 支持包围式成像的系统、方法和移动平台
CN110249622A (zh) * 2017-01-28 2019-09-17 微软技术许可有限责任公司 实时的语义感知的相机曝光控制
CN110636227A (zh) * 2019-09-24 2019-12-31 合肥富煌君达高科信息技术有限公司 高动态范围hdr图像合成方法及集成该方法的高速相机
CN110868544A (zh) * 2019-11-25 2020-03-06 维沃移动通信(杭州)有限公司 一种拍摄方法及电子设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117177080A (zh) * 2023-11-03 2023-12-05 荣耀终端有限公司 视频获取方法、电子设备及计算机可读存储介质
CN117177080B (zh) * 2023-11-03 2024-04-16 荣耀终端有限公司 视频获取方法、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN114514738A (zh) 2022-05-17

Similar Documents

Publication Publication Date Title
CN108335279B (zh) 图像融合和hdr成像
US10764496B2 (en) Fast scan-type panoramic image synthesis method and device
WO2020029732A1 (zh) 全景拍摄方法、装置和成像设备
CN106060249B (zh) 一种拍照防抖方法及移动终端
US10027909B2 (en) Imaging device, imaging method, and image processing device
US8345109B2 (en) Imaging device and its shutter drive mode selection method
JP5214476B2 (ja) 撮像装置及び画像処理方法並びにプログラム
CN106357987B (zh) 一种曝光方法和装置
CN111028190A (zh) 图像处理方法、装置、存储介质及电子设备
EP3891974B1 (en) High dynamic range anti-ghosting and fusion
WO2020029679A1 (zh) 控制方法、装置、成像设备、电子设备及可读存储介质
JP2015149691A (ja) 画像補正装置、画像補正方法、及び、撮像装置
US20160088266A1 (en) Automatic image color correciton using an extended imager
WO2022109897A1 (zh) 延时摄影方法及设备、延时摄影视频生成方法及设备
CN110276714B (zh) 快速扫描式全景图图像合成方法及装置
US10447969B2 (en) Image processing device, image processing method, and picture transmission and reception system
WO2016123850A1 (zh) 终端拍照控制方法及终端
WO2015192545A1 (zh) 拍照的方法、装置及计算机存储介质
CN114449130B (zh) 一种多摄像头的视频融合方法及系统
WO2023000878A1 (zh) 拍摄方法、装置、控制器、设备和计算机可读存储介质
CN105208286A (zh) 一种模拟慢速快门的拍摄方法及装置
JP2023071409A (ja) 撮像装置およびその制御方法、プログラム
JP2015023514A (ja) 情報処理装置、撮像装置、制御方法、及びプログラム
CN110072050B (zh) 曝光参数的自适应调整方法、装置及一种拍摄设备
CN115701869A (zh) 摄影图像处理方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20962788

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20962788

Country of ref document: EP

Kind code of ref document: A1