CN113225490B - Time-delay photographing method and photographing device thereof - Google Patents

Time-delay photographing method and photographing device thereof

Info

Publication number
CN113225490B
CN113225490B (application CN202010079931.0A)
Authority
CN
China
Prior art keywords
image
time
delay
frame
lapse
Prior art date
Legal status
Active
Application number
CN202010079931.0A
Other languages
Chinese (zh)
Other versions
CN113225490A (en)
Inventor
邹剑 (Zou Jian)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010079931.0A priority Critical patent/CN113225490B/en
Publication of CN113225490A publication Critical patent/CN113225490A/en
Application granted granted Critical
Publication of CN113225490B publication Critical patent/CN113225490B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention relates to a time-lapse photography method comprising: acquiring multiple frames of images collected by a camera during video recording; extracting first delayed images of a corresponding frame number from the multi-frame images according to a preset frame rate; performing high dynamic range image processing on the first delayed images to obtain second delayed images; and synthesizing multiple frames of the second delayed images in time order to obtain a time-lapse video. In this time-lapse photography method, the frames collected by the camera are extracted at the preset frame rate to obtain the first delayed images, high dynamic range image processing is applied to the first delayed images to obtain brightness-optimized second delayed images, and the second delayed images are finally synthesized into the time-lapse video, so that the image quality of the time-lapse video is optimized and users are offered a different experience.

Description

Time-delay photographing method and photographing device thereof
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a time-lapse photography method and a photographing device for an intelligent device.
Background
Camera applications in electronic devices such as intelligent terminals generally provide a time-lapse photography function. Time-lapse photography, also known as time-lapse video recording, is a shooting technique that compresses time, reproducing a slowly changing scene within a short period. The video obtained by conventional time-lapse photography methods is often not clear enough.
Disclosure of Invention
The invention provides a time-lapse photography method and a photographing device thereof, which are used to improve the image quality of a time-lapse video.
The first aspect of the invention discloses a time-lapse photography method, comprising the following steps:
when video is recorded, acquiring multi-frame images acquired by a camera;
extracting a first delay image with corresponding frame number from the multi-frame images according to a preset frame rate;
performing high dynamic range image processing on the first time delay image to obtain a second time delay image;
and synthesizing a plurality of frames of second delay images according to a time sequence to obtain a delay photographic video.
The second aspect of the present invention discloses a time-lapse photographing apparatus, the apparatus comprising:
the image acquisition module is used for acquiring multi-frame images acquired by the camera;
the delay frame extraction module is used for extracting a first delay image with corresponding frame number from the multi-frame images according to a preset frame rate;
the image processing module is used for carrying out high dynamic range image processing on the first delay image to obtain a second delay image;
and the video synthesis module is used for synthesizing the multiple frames of the second delay images according to the time sequence to obtain the delay photographic video.
The third aspect of the invention discloses an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the time-lapse photography method described above.
A fourth aspect of the present invention discloses a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a time lapse photography method as described above.
According to the embodiments of the invention, the time-lapse photography method extracts frames from the multi-frame images collected by the camera at a preset frame rate to obtain the first delayed images, performs high dynamic range image processing on the first delayed images to obtain brightness-optimized second delayed images, and finally synthesizes the second delayed images into a time-lapse video, thereby optimizing the image quality of the time-lapse video and offering users a different experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the invention or of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a time-lapse photography method according to the present invention;
FIG. 2 is a block diagram showing a time-lapse photographing apparatus according to the present invention;
fig. 3 is a schematic block diagram of an electronic device according to the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention will be clearly described in conjunction with the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the first aspect of the present invention discloses a time-lapse photographing method, taking an intelligent terminal as an example, including:
s110, acquiring multi-frame images acquired by a camera during video recording;
generally, function options of time-lapse photography are set in camera applications of the intelligent terminal. After detecting that a user opens a camera application, a function option of delayed shooting is set in a preview interface displayed by the intelligent terminal. If the user is detected to select the function option, the intelligent terminal can enter a time-delay shooting mode of the camera application. Of course, one or more shooting modes such as a photo mode, a portrait mode, a panoramic mode, a video mode, or a slow motion mode may also be set in the preview interface, which is not limited in this embodiment of the present application.
After the intelligent terminal enters the time-lapse shooting mode, the preview interface displays the picture currently captured by the camera. Since the time-lapse video has not yet been recorded, the picture displayed in real time in the preview interface may be called a preview picture. The preview interface also contains a record button for time-lapse shooting. If the user is detected to tap the record button in the preview interface, this indicates that the user wants to perform a recording operation in the time-lapse mode; the intelligent terminal then continues to use the camera to collect each captured frame and starts recording the time-lapse video.
For example, the intelligent terminal may collect frames at a certain acquisition frequency. Taking an acquisition frequency of 30 frames per second as an example, after the user taps the record button in the preview interface, the terminal collects 30 frames between 0 and 1 second and another 30 frames between 1 and 2 seconds. As recording time elapses, the collected frames gradually accumulate, and the intelligent terminal finally forms the time-lapse video from the multiple frames extracted from the collected pictures at the frame-extraction frequency.
Specifically, in other embodiments of the invention, a resolution adjustment button may be provided on the preview interface of the intelligent terminal, allowing the user to choose the resolution of each collected frame. The button may offer resolution options such as normal, high definition, and ultra high definition, so that the user can select the resolution of the terminal's camera. In practice, a user may upload a video processed by the time-lapse photography method to a network platform for sharing, and each platform imposes its own limits on the size and resolution of a single uploaded video; with the resolution adjustment button, the user can choose an appropriate resolution according to the limits of the target platform, meeting the user's needs.
In the embodiment of the application, the intelligent terminal may duplicate the video stream formed by the collected frames into two video streams. Each stream contains every frame collected in real time after the time-lapse recording function is started. The intelligent terminal can then use one stream to perform step S120 below, completing the display task of the preview interface during time-lapse shooting, while using the other stream to perform steps S130-S150 below, completing the production of the time-lapse video.
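A minimal sketch, assuming a simple queue-based fan-out (the function and queue names are illustrative, not from the patent), of how each collected frame could be duplicated into a preview stream and a processing stream:

```python
import queue

# One queue per consumer: the preview display (step S120) and the
# time-lapse production pipeline (steps S130-S150).
preview_queue = queue.Queue()
timelapse_queue = queue.Queue()

def on_frame_captured(frame):
    """Camera callback: fan the same frame out to both streams."""
    preview_queue.put(frame)
    timelapse_queue.put(frame)

# Example: simulate three captured frames.
for i in range(3):
    on_frame_captured({"index": i, "pixels": None})

print(preview_queue.qsize(), timelapse_queue.qsize())  # -> 3 3
```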
S120, transmitting the multi-frame images to a preview interface for display;
the preview interface can display each frame of shooting picture currently collected by the camera in real time, for example, a user shoots sunrise pictures through the camera, the user generally needs 3 minutes to act from the sun appearing on the horizon to the sun departing from the horizon, and then the preview interface can display the sunrise pictures shot by each frame collected by the camera in three minutes in real time; meanwhile, a prompt in shooting can be displayed in the preview interface so as to prompt the user that the user is in a recording state currently; the current recording duration may also be displayed to reflect the length of the current recording time. Specifically, the display mode may draw each frame of shooting picture currently acquired by the camera onto a display of the camera device through any one of a surface view or a surface structure view/glsurface structure view component.
S130, extracting a first time-delay image with corresponding frame number from the multi-frame images according to a preset frame rate;
the first time-delay images are acquired from all the shot images, are distinguished according to time sequence, and in a period of time area, the first time-delay images with corresponding frames are uniformly selected according to preset frame rate and the same time interval, so that the consistency of actions is ensured.
Specifically, extracting a first delayed image with a corresponding frame number from the multi-frame images according to a preset frame rate, including:
s131, acquiring a time delay multiple of time delay photography;
the time delay multiple is the time delay time of time delay photography, the longer the time delay multiple is, the slower the action of the time delay time is, and the fewer the number of images to be extracted is; the smaller the delay multiple, the shorter the delay time, the faster the action, and the larger the number of images to be extracted.
For example, when a sunrise is filmed and played back as a normal video, the sun rises so slowly that its movement relative to the horizon is hard for the naked eye to perceive, which makes for a poor viewing experience. With the time-lapse photography method, the whole sunrise can be accelerated into a short clip that plays continuously, in which the sun's movement relative to the horizon is sped up, so the user can watch the sunrise more intuitively. How much the sunrise is accelerated is determined by the delay multiple: the larger the delay multiple and the longer the delay time, the slower the sun moves relative to the horizon in the short clip; the smaller the delay multiple and the shorter the delay time, the faster it moves.
S132, calculating the buffer frame number of the buffer according to the time delay multiple;
after multi-frame image acquisition, buffers in the application layer need to be entered for temporary storage, and the frame rate of each buffer is set in advance, for example, 30 frames/s or 60 frames/s.
Buffer frame number = frame rate of the buffer / delay multiple.
S133, extracting the image with the buffer frame number from the multi-frame image to obtain a first time delay image.
Part of the multi-frame images is extracted according to the calculated buffer frame number to obtain the first delayed images. For example, suppose the camera has captured 120 frames into the buffer; to achieve the time-lapse effect, only 30 of those 120 frames are extracted at regular intervals (one frame is kept and the next three are skipped). That is, the first delayed images are obtained by extracting part of the frames from the multi-frame images.
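The Python sketch below (function and variable names are illustrative assumptions, not from the patent) shows the relation stated in steps S131-S133: the buffer frame number equals the buffer frame rate divided by the delay multiple, and frames are then kept at a uniform interval.

```python
def extract_first_delay_images(frames, buffer_fps, delay_multiple):
    """Keep buffer_fps / delay_multiple frames per second of recording,
    picked at a uniform step so that motion stays continuous."""
    buffer_frame_number = max(1, int(buffer_fps / delay_multiple))
    step = max(1, round(buffer_fps / buffer_frame_number))  # e.g. 120 / 30 -> keep 1 of every 4
    return frames[::step]

# Example matching the description: 120 captured frames, 30 kept.
captured = list(range(120))
first_delay_images = extract_first_delay_images(captured, buffer_fps=120, delay_multiple=4)
print(len(first_delay_images))  # -> 30
```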
S140, performing high dynamic range image processing on the first time delay image to obtain a second time delay image;
the high dynamic range image processor performs shading optimization processing on the image. In the field related to image technology, the requirement for images is gradually increasing, and thus, images closer to a real scene are required to be presented, meaning that the dynamic range of images is larger and is close to the dynamic range visible to the human eye, and thus, high Dynamic Range (HDR) images are introduced. Because of the characteristics of the HDR image itself, the processing of the HDR image is roughly divided into: acquisition of an HDR image, storage of the HDR image, dynamic range compression, and expansion of an LDR image into an HDR image and image rendering using the HDR image.
Specifically, the performing high dynamic range image processing on the first delayed image to obtain a second delayed image includes:
s141, performing image registration on first delay images with different multi-frame time sequences to obtain a target image;
the camera sensor generally shoots a plurality of images with different exposure degrees under the same scene, performs image registration on the images, searches the same characteristic point in the images, and two or more images with accurate matching are target images. Specifically, a corresponding algorithm is set in an image processing module of the camera, the same characteristic point in two images or multiple images is found, for example, if the difference of the positions of the two images or multiple images is not more than 1%, the same point can be considered as the definition, when the difference of the positions of the two images or multiple images is not more than 1%, the two images or multiple images are precisely matched, and correspondingly, the two images or multiple images are target images.
S142, calculating irradiance of each pixel point in the multi-frame target image;
Each marked pixel point is computed from the exposure time and the gray value of the CMOS (Complementary Metal Oxide Semiconductor) chip to obtain the original irradiance of the corresponding pixel point. Specifically, a RAW (unprocessed) image can be obtained from the target image; the RAW image is the original data of the image. For example, if the user set the exposure to -1 when shooting, the data returned by the RAW image is the image exposed at -1. Irradiance is the radiant flux per unit area of the illuminated surface, and the irradiance of each pixel point in the target image can be obtained by subtracting the exposure value from the original irradiance.
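The patent does not spell out the exact formula, so the Python sketch below only illustrates the relationship as stated: gray value and exposure time yield an "original irradiance", from which the exposure value is subtracted; the function name and the scaling are hypothetical.

```python
import numpy as np

def pixel_irradiance(raw_gray, exposure_time_s, exposure_value=0.0):
    """Rough per-pixel irradiance: RAW gray value scaled by exposure time,
    minus the exposure (EV) compensation applied when shooting."""
    raw_gray = np.asarray(raw_gray, dtype=np.float64)
    original_irradiance = raw_gray / exposure_time_s
    return original_irradiance - exposure_value

# Example: a 2x2 RAW patch shot at 1/100 s with EV -1 applied.
patch = [[120, 130], [140, 150]]
print(pixel_irradiance(patch, exposure_time_s=1 / 100, exposure_value=-1.0))
```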
S143, synthesizing the second delayed image according to the irradiance of each pixel point in the multiple frames of target images. Regions with large brightness differences in the target images are marked, and the differing irradiance values are adjusted so that over-bright regions are darkened and over-dark regions are brightened, thereby synthesizing the second delayed image. For example, a corresponding algorithm in the time-lapse photographing device compares the brightness values of the same pixel point across the multiple frames of the target image; when the brightness values of that pixel differ by more than a threshold designed into the algorithm, for example 5%, the pixel is defined as a region of large brightness difference in the target image and is marked, after which the time-lapse photographing device darkens the over-bright regions and brightens the over-dark regions to synthesize the second delayed image.
Specifically, the synthesizing the second delayed image according to irradiance of each pixel point in the target image includes:
s1431, comparing the radiance of the corresponding pixel points in the multi-frame target image to obtain a differential radiance value, for example, the radiance of the corresponding pixel points in one frame of the target image is 10, the radiance of the corresponding pixel points in zero one frame of the target image is 20, and the differential radiance value of the two frames of the target images is 10; and obtaining the difference of the radiance values of the pixel points according to the difference of the radiance, namely obtaining all the difference radiance values of the pixel points.
S1432, performing normalization according to the sum of the Gaussian weights of the difference radiance values to obtain the irradiance adjustment. All the radiance values are weighted and averaged to obtain a reasonable adjusted radiance, which is the optimal adjustment value. Specifically, normalizing by the sum of the Gaussian weights of the difference radiance values amounts to applying Gaussian blurring to the difference radiance values, a processing effect widely used in image-processing software such as Adobe Photoshop, GIMP and Paint.NET, typically to reduce image noise and detail. The visual effect of an image produced with this blurring technique is like viewing the image through frosted glass, which differs clearly from the out-of-focus effect of a lens and from ordinary lighting shadows. Gaussian smoothing is also used in the preprocessing stage of computer vision algorithms to enhance images at different scales. Mathematically, Gaussian blurring of an image is the convolution of the image with a normal distribution; since the normal distribution is also called the Gaussian distribution, the technique is called Gaussian blur. Convolving the image with a circular box blur would produce a more accurate out-of-focus effect.
S1433, adjusting, using the irradiance adjustment, the corresponding pixel points in the multiple frames of target images to obtain target pixel points. The brightness of each pixel point is adjusted with the optimized adjustment value, making the brightness of the pixel points more uniform and the pixels clearer.
S1434, synthesizing the second delayed image from the target pixel points. The clearer pixel points are stitched together to form the final second delayed image.
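The Python sketch below is one possible reading of steps S1431-S1434, not the patent's exact algorithm: per-pixel irradiance differences are turned into Gaussian weights, the weights are normalized, and the frames are blended with them so that outlying bright or dark values are pulled toward the rest; the sigma parameter and array layout are assumptions.

```python
import numpy as np

def fuse_frames(irradiance_stack, sigma=10.0):
    """Blend frames per pixel: weight each frame by a Gaussian of its deviation
    from the per-pixel mean irradiance, normalize, and take the weighted sum."""
    stack = np.asarray(irradiance_stack, dtype=np.float64)  # shape: (frames, H, W)
    diff = stack - stack.mean(axis=0, keepdims=True)        # difference radiance values
    weights = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))     # Gaussian weights
    weights /= weights.sum(axis=0, keepdims=True)           # normalization
    return (weights * stack).sum(axis=0)                    # fused target pixel points

# Example with three 1x2 irradiance maps; the over-bright value 100 is pulled down.
fused = fuse_frames([[[10.0, 100.0]],
                     [[12.0,  60.0]],
                     [[11.0,  65.0]]])
print(fused.round(1))  # roughly [[11.0, 64.9]]
```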
S150, synthesizing the multiple frames of second delayed images in time order to obtain the time-lapse video. A time-lapse video synthesized from the multi-frame, optimized second delayed images is clearer and has a richer picture texture.
Specifically, the synthesizing the multiple frames of the second delayed images according to the time sequence to obtain the delayed photographic video includes:
s151, sequentially carrying out beautifying treatment on a plurality of frames of the second delayed images to obtain a third delayed image;
and S152, synthesizing a plurality of frames of the third delayed images according to a time sequence to obtain a delayed photographic video.
The second delayed images may also be subjected to beautification, making the resulting time-lapse video more aesthetically pleasing.
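As a rough illustration of steps S151-S152 (the codec, frame rate, output filename, and the placeholder beautification step are assumptions, not details from the patent), the processed frames could be written out in time order as follows.

```python
import cv2
import numpy as np

def write_timelapse(frames, path="timelapse.mp4", fps=30):
    """Apply a trivial stand-in 'beautify' step to each frame and append the
    frames, in time order, to a video file."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        beautified = cv2.convertScaleAbs(frame, alpha=1.05, beta=2)  # placeholder beautification
        writer.write(beautified)
    writer.release()

# Example: 60 synthetic 64x64 frames of a gradually brightening scene.
frames = [np.full((64, 64, 3), min(255, 4 * i), dtype=np.uint8) for i in range(60)]
write_timelapse(frames)
```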
In the present invention, the high dynamic range image processing may also be performed before the frame extraction operation on the multi-frame images. This makes it easier to keep the image displayed in the preview interface consistent with the images in the time-lapse video, improving the uniformity between the two.
Referring further to fig. 2, the second aspect of the invention discloses a time-lapse photographing apparatus comprising: an image acquisition module for acquiring the multi-frame images collected by the camera; a delay frame extraction module for extracting the first delayed images of the corresponding frame number from the multi-frame images according to the preset frame rate; an image processing module 230 for performing high dynamic range image processing on the first delayed images to obtain the second delayed images; and a video synthesis module 240 for synthesizing the multiple frames of second delayed images in time order to obtain the time-lapse video. The specific implementation of the apparatus has already been described in detail in the time-lapse photography method above and is not repeated here.
The third aspect of the invention discloses an electronic device comprising a memory, a processor, and a program stored in the memory and executable on the processor, the processor implementing the method described above when executing the program. Fig. 3 provides a schematic structural diagram of an electronic device according to an embodiment of the invention. Referring to fig. 3, the electronic device 90 includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, a processor 980, a power supply 990, and the like. Those skilled in the art will appreciate that the structure shown in fig. 3 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Each constituent element of the electronic device of this embodiment is described in detail below with reference to fig. 3:
the RF circuit 910 may be configured to receive and transmit signals during the process of receiving and transmitting information, and in particular, after receiving downlink information of the base station, process the downlink information for the processor 980; in addition, the data of the design uplink is sent to the base station. Typically, the RF circuitry 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuitry 910 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE), email, short message service (Short Messaging Service, SMS), and the like.
The memory 920 may be used to store software programs and modules; the processor 980 performs the various functional applications and data processing of the electronic device by running the software programs and modules stored in the memory 920. The memory 920 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the device (such as audio data and a phonebook), and the like. In addition, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. In particular, the input unit 930 may include a touch panel 931 and other input devices 932. The touch panel 931, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (such as operations of the user on the touch panel 931 or thereabout using any suitable object or accessory such as a finger, a stylus, or the like) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 931 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 980, and can receive commands from the processor 980 and execute them. In addition, the touch panel 931 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 930 may include other input devices 932 in addition to the touch panel 931. In particular, other input devices 932 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 940 may be used to display information input by a user or information provided to the user and various menus of the electronic device. The display unit 940 may include a display panel 941, and alternatively, the display panel 941 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 931 may overlay the display panel 941, and when the touch panel 931 detects a touch operation thereon or thereabout, the touch operation is transferred to the processor 980 to determine a type of touch event, and then the processor 980 provides a corresponding visual output on the display panel 941 according to the type of touch event. Although in fig. 3, the touch panel 931 and the display panel 941 are implemented as two separate components for the input and output functions of the mobile phone, in some embodiments, the touch panel 931 may be integrated with the display panel 941 to implement the input and output functions of the mobile phone.
The electronic device may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 941 according to the brightness of ambient light. Audio circuitry 960, speaker 961, microphone 962 may provide an audio interface between a user and an electronic device. Audio circuit 960 may transmit the received electrical signal converted from audio data to speaker 961, where it is converted to a sound signal by speaker 961 for output; on the other hand, microphone 962 converts the collected sound signals into electrical signals, which are received by audio circuit 960 and converted into audio data, which are processed by audio data output processor 980 for transmission to, for example, another electronic device via RF circuit 910 or for output to memory 920 for further processing.
WiFi belongs to short-range wireless transmission technology, and the electronic device can provide wireless broadband internet access for users through the WiFi module 970. Although fig. 3 shows a WiFi module 970, it is understood that it does not belong to the necessary constitution of the electronic device, and can be omitted entirely as needed within the scope of not changing the essence of the invention.
The processor 980 is the control center of the electronic device; it connects the various parts of the whole device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 920 and invoking the data stored in the memory 920, thereby monitoring the electronic device as a whole. Optionally, the processor 980 may include one or more processing units. Preferably, the processor 980 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like. The processor 980 may also integrate a modem processor; alternatively, the modem processor may not be integrated into the processor 980.
The electronic device also includes a power supply 990 (e.g., a battery) that provides power to the various components, preferably in logical communication with the processor 980 through a power management system, for managing charge, discharge, and power consumption by the power management system. Although not shown, the electronic device may further include a camera, a bluetooth module, etc., which will not be described herein.
The computer program product of the time-lapse photography method, the photographing device, and the readable storage medium provided in the embodiments of the present invention includes a readable storage medium storing program code; the instructions contained in the program code may be used to execute the method described in the foregoing method embodiments, and for the specific implementation reference may be made to the method embodiments, which are not repeated here.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a storage medium readable by an electronic device. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a vehicle-mounted computer, a PDA, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
A fourth aspect of the present invention discloses a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a time lapse photography method as described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present invention is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing is a description of the embodiments of the present invention, and is not to be construed as limiting the invention, since modifications in the detailed description and the application scope will become apparent to those skilled in the art upon consideration of the teaching of the embodiments of the present invention.

Claims (10)

1. A time-lapse photography method comprising:
when video is recorded, acquiring multi-frame images acquired by a camera;
extracting a first delay image with corresponding frame number from the multi-frame images according to a preset frame rate;
performing high dynamic range image processing on the first time-delay image to obtain a second time-delay image, including: performing image registration on the multiple frames of first delay images to obtain multiple frames of target images; calculating irradiance of each pixel point in the multi-frame target image; comparing the radiance of the corresponding pixel points in the multi-frame target image to obtain a differential radiance value; carrying out Gaussian blur processing on the difference radiance value to obtain adjusted irradiance; based on the irradiance adjustment, adjusting the brightness of corresponding pixel points in the multi-frame target image to obtain target pixel points; synthesizing the second delay image by using the target pixel point;
and synthesizing a plurality of frames of second delay images according to a time sequence to obtain a delay photographic video.
2. The method of time lapse imaging of claim 1, wherein said calculating irradiance for each pixel point in the multi-frame target image comprises:
calculating irradiance of each marked pixel point according to the exposure time and the gray value of the CMOS chip to obtain original irradiance of the corresponding pixel point;
and subtracting the exposure value from the irradiance to obtain the irradiance of each pixel point in the multi-frame target image.
3. The time-lapse photography method according to claim 1, wherein extracting the first time-lapse image of the corresponding frame number from the multi-frame images at a preset frame rate comprises:
acquiring a time delay multiple of time delay photography;
calculating the buffer frame number of the buffer according to the time delay multiple;
and extracting the image with the buffer frame number from the multi-frame image to obtain a first delay image.
4. The method of time-lapse video shooting according to claim 1, wherein the synthesizing the plurality of frames of the second time-lapse images in time sequence to obtain the time-lapse video comprises: sequentially carrying out beautifying treatment on a plurality of frames of the second delayed images to obtain a third delayed image; and synthesizing a plurality of frames of the third delayed images according to the time sequence to obtain the delayed photographic video.
5. The method of time-lapse photography according to claim 1, wherein, after acquiring the multi-frame images acquired by the camera during video recording, further comprising:
and transmitting the multi-frame images to a preview interface for display.
6. The time-lapse photography method according to claim 5, wherein the preview interface is provided with a resolution adjustment button for selecting a resolution of each captured frame of the captured picture.
7. The method of time-lapse photography according to claim 1, wherein, after acquiring the multi-frame images acquired by the camera during video recording, further comprising:
and carrying out high dynamic range image processing on the multi-frame images.
8. A time-lapse imaging apparatus, the apparatus comprising:
the image acquisition module is used for acquiring multi-frame images acquired by the camera;
the delay frame extraction module is used for extracting a first delay image with corresponding frame number from the multi-frame images according to a preset frame rate;
the image processing module is configured to perform high dynamic range image processing on the first delayed image to obtain a second delayed image, and includes: performing image registration on the multiple frames of first delay images to obtain multiple frames of target images; calculating irradiance of each pixel point in the multi-frame target image; comparing the radiance of the corresponding pixel points in the multi-frame target image to obtain a differential radiance value; carrying out Gaussian blur processing on the difference radiance value to obtain adjusted irradiance; based on the irradiance adjustment, adjusting the brightness of corresponding pixel points in the multi-frame target image to obtain target pixel points; synthesizing the second delay image by using the target pixel point;
and the video synthesis module is used for synthesizing the multiple frames of the second delay images according to the time sequence to obtain the delay photographic video.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the time lapse photography method of any of claims 1 to 7.
10. A readable storage medium having stored thereon a computer program, which, when executed by a processor, implements the steps of the time lapse photography method of any one of claims 1 to 7.
CN202010079931.0A 2020-02-04 2020-02-04 Time-delay photographing method and photographing device thereof Active CN113225490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010079931.0A CN113225490B (en) 2020-02-04 2020-02-04 Time-delay photographing method and photographing device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010079931.0A CN113225490B (en) 2020-02-04 2020-02-04 Time-delay photographing method and photographing device thereof

Publications (2)

Publication Number Publication Date
CN113225490A CN113225490A (en) 2021-08-06
CN113225490B true CN113225490B (en) 2024-03-26

Family

ID=77085415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010079931.0A Active CN113225490B (en) 2020-02-04 2020-02-04 Time-delay photographing method and photographing device thereof

Country Status (1)

Country Link
CN (1) CN113225490B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676657B (en) * 2021-08-09 2023-04-07 维沃移动通信(杭州)有限公司 Time-delay shooting method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7626614B1 (en) * 2005-02-15 2009-12-01 Apple Inc. Transfer function and high dynamic range images
US8340453B1 (en) * 2008-08-29 2012-12-25 Adobe Systems Incorporated Metadata-driven method and apparatus for constraining solution space in image processing techniques
JP2017118427A (en) * 2015-12-25 2017-06-29 オリンパス株式会社 Information terminal device, imaging device, image information processing system, and image information processing method
WO2017166954A1 (en) * 2016-03-31 2017-10-05 努比亚技术有限公司 Apparatus and method for caching video frame and computer storage medium
CN107451970A (en) * 2017-07-28 2017-12-08 电子科技大学 A kind of high dynamic range images generation method based on single-frame images
CN108012080A (en) * 2017-12-04 2018-05-08 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN108492262A (en) * 2018-03-06 2018-09-04 电子科技大学 It is a kind of based on gradient-structure similitude without ghost high dynamic range imaging method
CN109068052A (en) * 2018-07-24 2018-12-21 努比亚技术有限公司 video capture method, mobile terminal and computer readable storage medium
CN110086985A (en) * 2019-03-25 2019-08-02 华为技术有限公司 A kind of method for recording and electronic equipment of time-lapse photography

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10284774B2 (en) * 2015-12-25 2019-05-07 Olympus Corporation Information terminal apparatus, image pickup apparatus, image-information processing system, and image-information processing method for controlling time-lapse imaging

Also Published As

Publication number Publication date
CN113225490A (en) 2021-08-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant