CN110086985B - Recording method for delayed photography and electronic equipment

Info

Publication number
CN110086985B
Authority
CN
China
Prior art keywords
frame
time period
recording time
shot
electronic device
Prior art date
Legal status
Active
Application number
CN201910229645.5A
Other languages
Chinese (zh)
Other versions
CN110086985A (en
Inventor
陈绍君
杨丽霞
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910229645.5A
Publication of CN110086985A
Priority to PCT/CN2020/079402 (published as WO2020192461A1)
Application granted
Publication of CN110086985B
Legal status: Active

Classifications

    • H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof:
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N5/00 Details of television systems:
    • H04N5/144 Movement detection
    • H04N5/147 Scene change detection
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a recording method for delayed photography and an electronic device, and relates to the field of terminal technologies. The method includes the following steps: the electronic device displays a preview interface of the delayed photography mode; in response to a recording operation performed by the user, the electronic device starts to record each frame of the shooting picture captured by the camera; the electronic device extracts X frames of shooting pictures from the N1 frames collected in a first recording time period using a first frame extraction frequency; the electronic device extracts Y frames of shooting pictures from the N2 frames collected in a second recording time period using a second frame extraction frequency, where the second frame extraction frequency is different from the first frame extraction frequency; and in response to a recording stop operation performed by the user, the electronic device encodes the extracted M frames of shooting pictures into a time-lapse video.

Description

Recording method for delayed photography and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a recording method for delayed photography and an electronic device.
Background
A camera application in an electronic device such as a mobile phone is generally provided with a time-lapse photography function. Time-lapse photography, which may also be referred to as time-lapse video recording, is a shooting technique that compresses time and can reproduce, within a short time, a process in which a scene changes slowly.
Consider an example in which a user shoots the blooming of a flower using the time-lapse function of a mobile phone. A bud takes about 3 days and 3 nights (roughly 72 hours) to open. After the user starts the time-lapse function, the mobile phone extracts frames from the recorded footage at a certain frame extraction frequency. For example, the mobile phone may extract one frame every half hour, recording the gradual progress of the blooming in frame-extraction order. In this way, the mobile phone acquires 144 picture frames after 72 hours. The mobile phone can then play these 144 frames in sequence at a normal frame rate (e.g., 24 frames per second), reproducing the 3-day-and-3-night blooming process within 6 seconds (144 / 24 = 6).
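As a quick check of the arithmetic above, here is a minimal Python sketch; it is illustrative only, and all values are taken from the example in the text:

```python
# Illustrative check of the blooming example above (values from the text).
record_hours = 72          # about 3 days and 3 nights
frames_per_hour = 2        # one frame extracted every half hour
playback_fps = 24          # normal playback frame rate

extracted_frames = record_hours * frames_per_hour   # 144 frames
playback_seconds = extracted_frames / playback_fps  # 144 / 24 = 6.0 seconds
print(extracted_frames, playback_seconds)           # 144 6.0
```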
Generally, the frame extraction frequency of the time-lapse function is preset at the factory, or it may be set manually by the user. Once the frame extraction frequency is determined, the mobile phone collects picture frames at that fixed frequency during delayed shooting and composes them into the time-lapse video. However, when shooting a changing scene such as a blooming flower or a sunrise, extracting frames at a fixed frequency means that fast-changing, short-lived highlights (such as the instant of blooming or the moment of sunrise) yield only a few frames, while the long, slowly changing preparation phases of the blooming or the sunrise yield many. As a result, the highlights of the time-lapse video are not brought out, and the user experience suffers.
Disclosure of Invention
The application provides a recording method for delayed photography and an electronic device, which can record a time-lapse video using a dynamically changing frame extraction frequency, so that the highlight parts with a higher degree of picture change are emphasized and the user experience is improved.
To achieve the above purpose, the following technical solutions are adopted in this application:
in a first aspect, the present application provides a recording method for delayed photography, including: the electronic device displays a preview interface of the delayed photography mode; in response to a recording operation performed by the user (e.g., clicking a recording button in the preview interface), the electronic device starts recording each frame of the shooting picture captured by the camera; during recording, the electronic device extracts X (X < N1) frames from the N1 frames collected in a first recording time period using a first frame extraction frequency; the electronic device also extracts Y (Y < N2) frames from the N2 frames collected in a second recording time period using a second frame extraction frequency (different from the first); and in response to a recording stop operation performed by the user (e.g., clicking the recording button again in the preview interface), the electronic device encodes the M frames extracted during recording (including the X frames and the Y frames) into the time-lapse video.
That is to say, while shooting a time-lapse video, the electronic device can dynamically apply different frame extraction frequencies to the shooting pictures being recorded. The resulting time-lapse video therefore reflects how the shot content changes across different recording time periods, improving both the shooting effect of the time-lapse photography and the user experience.
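The following is a minimal, illustrative Python sketch of this per-period extraction idea. It is not the claimed implementation; representing a recording time period as a dictionary and realizing a "frame extraction frequency" as a stride are assumptions made for illustration only:

```python
from typing import List, Sequence

def extract_frames(periods: Sequence[dict]) -> List:
    """Keep every k-th frame of each recording time period.

    Each period is {'frames': [...], 'stride': k}; the per-period stride
    plays the role of the first/second frame extraction frequency.
    """
    extracted = []
    for period in periods:
        extracted.extend(period["frames"][::period["stride"]])
    return extracted

# Example: N1 = 100 frames sampled sparsely in T1, N2 = 60 frames sampled
# densely in T2 (all values hypothetical).
t1 = {"frames": list(range(100)), "stride": 10}       # X = 10 frames
t2 = {"frames": list(range(100, 160)), "stride": 2}   # Y = 30 frames
m_frames = extract_frames([t1, t2])
print(len(m_frames))  # 40 = X + Y, later encoded into the time-lapse video
```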
In a possible implementation, after the electronic device starts recording each frame captured by the camera, the method further includes: if the change amplitude of the shot content within the first recording time period is smaller than a threshold, the electronic device determines the frame extraction frequency in the first recording time period to be the first frame extraction frequency; if the change amplitude of the shot content within the second recording time period is greater than or equal to the threshold, the electronic device determines the frame extraction frequency in the second recording time period to be the second frame extraction frequency, which is greater than the first frame extraction frequency.
That is to say, during recording the electronic device can select the frame extraction frequency according to how the shot content changes, and form the time-lapse video accordingly. When the change of the shot content is not obvious, meaning the shot content is moving slowly, the electronic device extracts frames at a low frequency. When the change is obvious, meaning the shot content is moving quickly, the electronic device extracts more frames at a high frequency; these additional frames capture the detailed changes of the shot content more accurately.
Illustratively, after the electronic device starts recording each frame captured by the camera, the method further includes: the electronic device calculates the optical flow intensity between every two adjacent captured frames, where the optical flow intensity reflects the change amplitude of the shot content between them; if the optical flow intensity between two adjacent frames within the first recording time period is smaller than a first preset value, indicating that the scene change in the shooting pictures is not obvious, the electronic device sets the frame extraction frequency of the first recording time period to the first frame extraction frequency, whose value is smaller; if the optical flow intensity between two adjacent frames within the second recording time period is greater than or equal to the first preset value, indicating that the scene change is obvious, the electronic device sets the frame extraction frequency of the second recording time period to the second frame extraction frequency, whose value is larger.
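As an illustrative sketch of how such an optical flow intensity might be computed, the snippet below uses OpenCV's dense Farneback optical flow and the mean per-pixel flow magnitude. The text only requires "a preset optical flow algorithm", so the algorithm choice, the aggregation by mean, and the threshold value are all assumptions:

```python
import cv2
import numpy as np

def optical_flow_intensity(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> float:
    """Mean optical-flow magnitude between two adjacent captured frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel displacement |u|
    return float(magnitude.mean())

FIRST_PRESET_VALUE = 1.0  # hypothetical threshold, in pixels per frame

def choose_frame_extraction_frequency(intensity: float) -> str:
    # Below the preset value: first (low) frequency; otherwise: second (high).
    return "first" if intensity < FIRST_PRESET_VALUE else "second"
```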
Alternatively, if the N1 frames captured in the first recording time period are detected to belong to a preset first shooting scene, the electronic device may determine the frame extraction frequency in the first recording time period to be the first frame extraction frequency; if the N2 frames captured in the second recording time period are detected to belong to a preset second shooting scene, the electronic device may determine the frame extraction frequency in the second recording time period to be the second frame extraction frequency. In general, the subject moves slowly in the first shooting scene and quickly in the second shooting scene. In this way, the final time-lapse video accurately presents the dynamic process of scene change across different shooting scenes, improving the shooting effect of the time-lapse video and the user experience.
Alternatively, if the movement speed of the shooting target in the first recording time period is detected to be less than a second preset value, the electronic device may determine the frame extraction frequency in the first recording time period to be the first frame extraction frequency; if the movement speed of the shooting target in the second recording time period is greater than or equal to the second preset value, the electronic device may determine the frame extraction frequency in the second recording time period to be the second frame extraction frequency, so that the time-lapse video emphasizes the part in which the shooting target moves quickly.
In a possible implementation, before the electronic device encodes the extracted M frames into the time-lapse video, the method further includes: the electronic device extracts Z frames from the N3 frames collected in a third recording time period using a third frame extraction frequency (the Z frames are part of the M frames extracted by the electronic device); the third frame extraction frequency differs from both the first and the second frame extraction frequencies, and the third recording time period does not overlap the first or the second recording time period.
In a possible implementation, after the electronic device starts recording each frame captured by the camera, the method further includes: the electronic device displays each frame of the shooting picture being recorded in real time in the preview interface.
In a possible implementation, when the electronic device displays each frame being recorded in real time in the preview interface, the method further includes: the electronic device also displays, in real time, the current recording duration and the corresponding playback duration of the time-lapse video, so that the user knows the video length of the time-lapse video at any time.
In a possible implementation, encoding the extracted M frames into the time-lapse video includes: the electronic device encodes the extracted M frames of shooting pictures into the time-lapse video at a preset frame rate.
In a possible implementation, after the electronic device encodes the extracted M frames into the time-lapse video, the method further includes: in response to the user opening the time-lapse video, the electronic device plays it at the preset frame rate. Because more frames were extracted while the scene was moving quickly, playback emphasizes the highlight parts with a higher degree of picture change in the recording.
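A minimal sketch of this final encoding step, assuming OpenCV's VideoWriter as a stand-in for the device's video codec (the container, codec, and function names are assumptions; only "encode the M extracted frames at a preset frame rate" comes from the text):

```python
import cv2

def encode_time_lapse(frames, out_path="timelapse.mp4", fps=30):
    """Encode the extracted M frames into a video at a preset frame rate."""
    if not frames:
        raise ValueError("no frames were extracted")
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:
        writer.write(frame)  # M frames play back evenly at `fps`
    writer.release()
```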
In a second aspect, the present application provides an electronic device comprising: a touchscreen, one or more processors, one or more memories, one or more cameras, and one or more computer programs; the processor is coupled with the touch screen, the memory and the camera, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device executes any one of the above recording methods of time-lapse photography.
In a third aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the recording method of delayed photography according to any one of the first aspect.
In a fourth aspect, the present application provides a computer program product, which when run on an electronic device, causes the electronic device to execute the recording method of time-lapse photography according to any one of the first aspect.
It is to be understood that the electronic device according to the second aspect, the computer storage medium according to the third aspect, and the computer program product according to the fourth aspect are all configured to execute the corresponding method provided above; their beneficial effects are therefore the same as those of the corresponding method and are not repeated here.
Drawings
Fig. 1 is a first schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic view of a shooting principle provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a recording method for delayed photography according to an embodiment of the present application;
fig. 4 is a first scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 5 is a second scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 6 is a third scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 7 is a fourth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 8 is a fifth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 9 is a sixth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 10 is a seventh scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 11 is an eighth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 12 is a ninth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 13 is a tenth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 14 is an eleventh scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 15 is a twelfth scene schematic diagram of a recording method for delayed photography according to an embodiment of the present application;
fig. 16 is a second schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 17 is a third schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
For example, the recording method for delayed photography provided in the embodiments of the present application may be applied to electronic devices such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, and a virtual reality device; the embodiments of the present application impose no limitation on this.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like.
It is to be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor, charger, flash, camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor via an I2C interface, such that the processor 110 and the touch sensor communicate via an I2C bus interface to implement touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 may receive input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The power management module 141 may be configured to monitor performance parameters such as battery capacity, battery cycle count, battery charging voltage, battery discharging voltage, battery state of health (e.g., leakage, impedance), and the like. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include one or more filters, switches, power amplifiers, Low Noise Amplifiers (LNAs), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices that integrate one or more communication processing modules. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. The camera 193 may be a front camera or a rear camera. As shown in fig. 2, the camera 193 generally includes a lens and a photosensitive element (sensor), which may be any photosensitive device such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor.
As also shown in fig. 2, during shooting, light reflected from the object being shot passes through the lens to form an optical image, which is projected onto the photosensitive element. The photosensitive element converts the received optical signal into an electrical signal, and the camera 193 then sends this electrical signal to a digital signal processing (DSP) module, finally obtaining a frame of digital image.
Similarly, in the process of recording a video, the DSP can obtain continuous multi-frame digital images according to the above shooting principle, and these frames form a video after being encoded at a certain frame rate. Due to the special physiological structure of the human eye, pictures viewed at a frame rate higher than 16 frames per second (fps) appear coherent; this phenomenon is known as persistence of vision. To ensure that video playback looks coherent to the user, the mobile phone may encode the multi-frame digital images output by the DSP at a certain frame rate (e.g., 24 fps or 30 fps). For example, if the DSP acquires 300 frames of digital images through the camera 193, the mobile phone can encode these 300 frames into a 10-second video at a preset frame rate of 30 fps (300 frames / 30 fps = 10 seconds).
One or more frames of digital images output by the DSP may be displayed on the mobile phone through the display screen 194, or stored in the internal memory 121 (or an external memory card connected through the external memory interface 120), which is not limited in this embodiment.
The digital signal processor is used to process digital signals; it can process digital image signals as well as other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may execute the instructions stored in the internal memory 121, so as to enable the electronic device 100 to execute the recording method for delayed photography provided in some embodiments of the present application, as well as various functional applications and data processing. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and one or more application programs (e.g., gallery, contacts, etc.). The data storage area may store data (e.g., photos, contacts, etc.) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a nonvolatile memory, such as one or more magnetic disk storage devices, flash memory devices, universal flash storage (UFS), and the like. In other embodiments, the processor 110 may cause the electronic device 100 to execute the recording method for delayed photography provided in the embodiments of the present application, as well as various functional applications and data processing, by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The electronic device 100 may be provided with one or more microphones 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like, which is not limited in this embodiment.
Of course, the electronic device 100 provided in this embodiment of the application may further include one or more devices such as a key 190, a motor 191, an indicator 192, and a SIM card interface 195, which is not limited in this embodiment of the application.
To facilitate a clear understanding of the following embodiments, a brief description of the related art is first given:
time-lapse photography, which may also be referred to as time-lapse photography or time-lapse video recording, is a photographing technique that can compress time, and can reproduce a scene slowly changing process in a short time. After the delayed shooting function is turned on, the electronic device 100 may start to record each frame of shot captured by the camera 193. The electronic device 100 may extract M (M < N) frames of captured images from N (N > 1) frames of captured images captured by the camera 193 as the delayed video for the delayed shooting at a certain frame extraction frequency. Subsequently, when the user turns on the delayed video camera, the electronic device 100 may play the extracted M frames of shot pictures at a certain frame rate, so as to reproduce scene changes in the N frames of shot pictures actually shot by the electronic device 100 through the M frames of shot pictures.
Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on an imaging plane (e.g., a shooting picture). When the time interval is small (e.g., between two consecutive frames of a video), the optical flow is equivalent to the displacement of the target point. When a human eye observes a moving object, the object forms a series of continuously changing images on the retina, and this continuously changing information keeps "flowing over" the retina (i.e., the image plane) like a stream of light, which is why it is called optical flow.
The optical flow expresses the intensity of image change and contains information about the motion of objects between adjacent frames. For example, suppose point A is at position (x1, y1) in the t-th frame and at (x2, y2) in the (t+1)-th frame. The optical flow intensity of point A between these two frames can then be represented by a two-dimensional vector u, where u = (u, v) = (x2, y2) − (x1, y1). The larger the optical flow intensity, the larger the movement amplitude and the faster the movement speed of point A in the shooting picture; the smaller the optical flow intensity, the smaller the movement amplitude and the slower the movement speed.
Because each frame of the shooting picture is composed of many pixel points, the optical flow intensity of each pixel point between two adjacent frames can be calculated as above. Then, based on the optical flow intensity of each pixel point in the captured image, the electronic device 100 can determine the overall optical flow intensity between the t-th frame and the (t+1)-th frame using a preset optical flow algorithm. Likewise, the larger this optical flow intensity, the greater the change of the captured content between the two frames; the smaller it is, the smaller the change.
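For a concrete sense of these quantities, here is a small worked example (the coordinates are chosen purely for illustration): if point A moves from (x1, y1) = (100, 40) in the t-th frame to (x2, y2) = (103, 44) in the (t+1)-th frame, then

```latex
\mathbf{u} = (u, v) = (x_2, y_2) - (x_1, y_1) = (103 - 100,\; 44 - 40) = (3, 4),
\qquad
\lVert \mathbf{u} \rVert = \sqrt{3^2 + 4^2} = 5 \ \text{pixels per frame interval.}
```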
In the embodiment of the present application, after the electronic device 100 starts the delayed photography function, it may extract M (M < N) frames from the N frames captured by the camera 193 using a dynamic frame extraction frequency to form the video of this time-lapse shooting. For example, if the optical flow intensities between adjacent frames during the first 10 seconds are all smaller than a preset value, indicating that the pictures captured in the first 10 seconds change little, the electronic device 100 may extract those pictures at a smaller first frame extraction frequency (e.g., one frame every 3 seconds). If the optical flow intensities between adjacent frames during the 11th to 20th seconds are all greater than the preset value, indicating that the pictures captured in the 11th to 20th seconds change greatly, the electronic device 100 may extract those pictures at a second frame extraction frequency (e.g., one frame every 0.5 seconds). Finally, the M frames extracted by the electronic device 100 form the video of this time-lapse shooting.
That is to say, in the time-lapse shooting method provided in the embodiment of the present application, the electronic device can select the frame extraction frequency according to how the shot content changes during recording, so as to form the time-lapse video. When the change of the shot content is not obvious, meaning the shot content is moving slowly, the electronic device extracts frames at a low frequency. When the change is obvious, meaning the shot content is moving quickly, the electronic device extracts more frames at a high frequency; these additional frames capture the detailed changes of the shot content more accurately.
A mobile phone is taken as an example of the electronic device 100, and a recording method of delayed photography according to an embodiment of the present application is described in detail below, as shown in fig. 3, the method includes steps S301 to S306.
S301, in response to the recording operation executed by the user in the delayed photography mode, the mobile phone starts to collect each frame of shooting picture captured by the camera.
Generally, the camera application of a mobile phone provides a function option for delayed photography. As shown in fig. 4, after detecting that the user opens the camera application, the preview interface 401 displayed by the mobile phone provides a function option 402 for delayed photography. If it is detected that the user selects this function option 402, the mobile phone enters the delayed photography mode of the camera application. Of course, one or more other shooting modes, such as a photo mode, a portrait mode, a panorama mode, a video mode, or a slow motion mode, may also be provided in the preview interface 401, which is not limited in this embodiment of the application.
After the mobile phone enters the delayed photography mode, the preview interface 401 can display the shooting picture 403 currently captured by the camera. Since recording of the delayed video has not yet started at this point, the shooting picture 403 displayed in real time on the preview interface 401 may be called a preview picture. In addition, the preview interface 401 includes a recording button 404 for delayed photography. If it is detected that the user clicks the recording button 404 in the preview interface 401, indicating that the user has performed the recording operation in delayed photography mode, the mobile phone continues to collect each frame of the shooting picture captured by the camera and starts recording the time-lapse video.
For example, the mobile phone may collect the shooting pictures at a certain collection frequency, e.g., 30 frames per second. After the user clicks the recording button 404 in the preview interface 401, the mobile phone collects 30 shooting pictures within seconds 0-1, another 30 within seconds 1-2, and so on. As the recording time elapses, the collected frames gradually accumulate, and the mobile phone finally forms the time-lapse video of this shooting from the frames extracted from the collected multi-frame pictures at the frame extraction frequency.
In the embodiment of the application, the mobile phone may duplicate the video stream formed by the collected shooting pictures into two identical video streams, each containing every frame collected in real time after the delayed recording function is started. The mobile phone then uses one stream to perform step S302 below, completing the preview display task during delayed shooting; meanwhile, it uses the other stream to perform steps S303-S305 below, completing the task of producing the time-lapse video.
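A minimal sketch of this two-stream split, assuming a queue-based fan-out (the text only states that the captured stream is copied into two identical streams; the mechanism and all names here are hypothetical):

```python
import queue

# Two identical streams fed from the capture callback: one for the preview
# task (step S302), one for the frame-extraction task (steps S303-S305).
preview_queue: queue.Queue = queue.Queue()
processing_queue: queue.Queue = queue.Queue()

def on_frame_captured(frame) -> None:
    preview_queue.put(frame)      # stream 1: real-time preview display
    processing_queue.put(frame)   # stream 2: optical flow + frame extraction
```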
S302, the mobile phone displays each collected frame of the shooting picture in real time in the preview interface.
In step S302, after the mobile phone starts recording the delayed video, as shown in fig. 5, the mobile phone may display each frame of the shooting picture 501 currently captured by the camera in real time in the preview interface 401. The mobile phone may also display a "shooting" prompt 502 in the preview interface 401 to indicate that recording is in progress. Of course, the mobile phone may instead apply a corresponding animation effect to the recording button 404 for the same purpose.
In some embodiments, as also shown in fig. 5, the cell phone may also display a currently recorded duration 503 in the preview interface 401. The recorded duration 503 may reflect the length of the current recording time. In addition, the mobile phone may also display a duration 504 of the delayed video in the preview interface 401, which corresponds to the currently recorded duration 503. For example, if the mobile phone plays the finally formed delayed video at the frame rate of 30 frames/second, when the duration 504 of the delayed video is 1 second, 30 frames of pictures need to be extracted from the pictures acquired by the mobile phone. If the frame extraction frequency of the mobile phone extracting the shot pictures is one frame per second, the mobile phone can extract 30 frames of shot pictures every 30 seconds. That is, when the recorded time length 503 is 30 seconds, the corresponding time length 504 of the delayed photographic video is 1 second. In this way, it is possible to compress a video recorded for a long time into a time-lapse video for a short time by time-lapse photography, thereby reproducing a process in which a subject slowly changes in a short time.
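A small illustrative computation of this mapping between the recorded duration (503) and the time-lapse duration (504), using the example values from the text (extraction at 1 frame per second of recording, playback at 30 fps); the function name is hypothetical:

```python
def time_lapse_duration(recorded_seconds: float,
                        extract_fps: float = 1.0,
                        playback_fps: float = 30.0) -> float:
    """Map the recorded duration (503) to the time-lapse duration (504).

    extract_fps is the number of frames extracted per second of recording;
    the default values mirror the example in the text.
    """
    return recorded_seconds * extract_fps / playback_fps

print(time_lapse_duration(30))  # 1.0 second of video per 30 seconds recorded
```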
S303, the mobile phone determines the optical flow intensity between adjacent frames among the N collected shooting pictures.
In step S303, as shown in fig. 6, after the mobile phone starts the delayed recording function, it collects N frames of shooting pictures at a certain collection frequency, where N grows dynamically with the recording time. For example, after collecting the 1st and 2nd frames, the mobile phone can calculate the optical flow intensity Q(1) between them using a preset optical flow algorithm. The greater Q(1), the greater the degree of change between the 1st and 2nd frames, i.e., the faster the subject moves in them. Similarly, after collecting the 3rd frame, the mobile phone calculates the optical flow intensity Q(2) between the 2nd and 3rd frames; after collecting the 4th frame, it calculates Q(3) between the 3rd and 4th frames; and so on, until after collecting the N-th frame it calculates Q(N−1) between the (N−1)-th and N-th frames.
The optical flow intensity Q calculated by the mobile phone each time reflects the degree of change between the two adjacent frames. Further, as shown in fig. 7, from the optical flow intensities Q between adjacent frames the mobile phone can determine how the optical flow intensity Q varies with the recording time during the recording of the delayed video, that is, a change curve of the optical flow intensity Q over the recording time (referred to as an optical flow curve in the following embodiments). The optical flow curve reflects how the shot picture changes during recording. It should be noted that, since the user generally controls when recording starts and stops, the optical flow curve dynamically extends as the recording time grows.
For example, as shown in fig. 7, when recording a sunrise by delayed photography, the content of the shot pictures before and after the sunrise generally changes slowly, and thus the values of the optical flow intensity Q corresponding to these two periods are relatively small. If the mobile phone extracted the shot pictures at a high frame extraction frequency in the two periods before and after the sunrise, the finally formed delayed video would contain many nearly identical pictures of those periods, in which the shot content barely changes.
In contrast, the sun moves relatively fast during the sunrise itself, so the value of the optical flow intensity Q corresponding to the sunrise period is relatively large. If the mobile phone extracted the shot pictures at a low frame extraction frequency during the sunrise, the finally formed delayed video would contain few pictures of the sunrise, with large jumps between adjacent pictures. Accordingly, by performing the following step S304, the mobile phone can dynamically set the frame extraction frequency of each recording period according to the optical flow curve, so as to improve the shooting effect of the delayed photography.
S304a, if the optical flow intensities corresponding to the N1 frames of shot pictures collected within the time period T1 are all smaller than a first threshold, the mobile phone extracts X frames of shot pictures from the N1 frames at a first frame extraction frequency.
S304b, if the optical flow intensities corresponding to the N2 frames of shot pictures collected within the time period T2 are all larger than the first threshold, the mobile phone extracts Y frames of shot pictures from the N2 frames at a second frame extraction frequency.
In steps S304a-S304b, the mobile phone can monitor in real time the magnitude of the optical flow intensity Q in the optical flow curve formed in step S303. Still taking the optical flow curve shown in fig. 7 as an example, if the optical flow intensities Q from the 1st second to the 10th second are all smaller than the first threshold, indicating that the shot scene did not change drastically during that period, the mobile phone may set the frame extraction frequency in the 1st to 10th seconds to a lower first frame extraction frequency, for example, extracting 1 frame every 2 seconds. Then, as shown in fig. 8, the mobile phone may extract 5 frames of shot pictures, at this first frame extraction frequency, from the N1 frames collected in the 1st to 10th seconds.
Correspondingly, still taking the optical flow curve shown in fig. 7 as an example, if the optical flow intensities Q from the 10th to the 15th second are all greater than the first threshold, indicating that the shot scene changed sharply during that period, the mobile phone may set the frame extraction frequency in the 10th to 15th seconds to a higher second frame extraction frequency, for example, extracting 1 frame every second. Then, as shown in fig. 9, the mobile phone may extract 5 frames of shot pictures, at this second frame extraction frequency, from the N2 frames collected in the 10th to 15th seconds.
Of course, if the delayed photography recording is not finished by the 15th second, the mobile phone may continue to dynamically adjust the frame extraction frequency of the subsequent recording periods according to the optical flow curve after the 15th second, which is not limited in this embodiment of the application.
That is, when it is detected that the scene in the shot picture changes little, the mobile phone extracts frames from the collected shot pictures at the lower first frame extraction frequency; when it is detected that the scene changes markedly, the mobile phone extracts frames at the higher second frame extraction frequency. In this way, while the shot scene is changing markedly, the mobile phone extracts more shot pictures, and these pictures reflect the detailed changes of the shot content more accurately, so that the finally recorded delayed video emphasizes the dynamic process of the obvious scene change.
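The two-tier selection of steps S304a-S304b can be sketched as follows; the sketch assumes frames are collected at one frame per second and that q_values[i] is the intensity Q between frames i and i+1 (all names and default intervals are illustrative):

    def extract_frames(frames, q_values, q_threshold,
                       slow_interval=2, fast_interval=1):
        # Keep one frame every fast_interval frames where the scene changes
        # quickly (Q >= threshold) and one every slow_interval frames where
        # it changes slowly. Assumes at least two frames were collected.
        kept = []
        countdown = 0
        for i, frame in enumerate(frames):
            if countdown <= 0:
                kept.append(frame)
                q = q_values[i] if i < len(q_values) else q_values[-1]
                countdown = fast_interval if q >= q_threshold else slow_interval
            countdown -= 1
        return kept

With slow_interval=2 this keeps 5 of the 10 frames collected in the 1st to 10th seconds, and with fast_interval=1 it keeps all 5 frames collected in the 10th to 15th seconds, matching the examples of figs. 8 and 9.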
In other embodiments, the mobile phone may also dynamically adjust the frame extraction frequency of different recording periods in units of a preset time window. As shown in (a) of fig. 10, the mobile phone may preset a time window 1001 with a length of 5 seconds. Starting from the beginning of the optical flow curve, the mobile phone can calculate the average value of the optical flow intensity Q within the time window 1001, that is, the average of Q over the 1st to 5th seconds. If the calculated average is greater than the threshold, the mobile phone extracts frames from the shot pictures collected within the time window 1001 at the higher second frame extraction frequency; if the average is smaller than the threshold, the mobile phone extracts frames at the lower first frame extraction frequency. Further, as shown in (b) of fig. 10, the mobile phone may move the time window 1001 to the next 5 seconds of the optical flow curve (i.e., the 5th to 10th seconds) and repeat the above calculation each time the window moves. In this way, the mobile phone dynamically adjusts the frame extraction frequency of the delayed video recording in units of 5 seconds. Of course, the size of the time window may be fixed or may change dynamically, which is not limited in this embodiment of the application.
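A minimal sketch of this windowed variant, assuming one Q sample per second and a fixed 5-second window (the window size and threshold are illustrative):

    def window_intervals(q_values, window=5, q_threshold=1.0,
                         slow_interval=2, fast_interval=1):
        # Slide a fixed-size window over the per-second Q samples and pick
        # one extraction interval per window from the window's average Q.
        intervals = []
        for start in range(0, len(q_values), window):
            q_window = q_values[start:start + window]
            q_mean = sum(q_window) / len(q_window)
            intervals.append(fast_interval if q_mean > q_threshold
                             else slow_interval)
        return intervals  # one extraction interval per 5-second window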
In addition, the above embodiment describes the dynamic adjustment method by taking the example that the mobile phone is provided with two frame extraction frequencies (i.e., the first frame extraction frequency and the second frame extraction frequency). It is understood that the mobile phone may also be provided with three or more frame extraction frequencies for dynamic adjustment.
For example, as shown in fig. 11, the mobile phone may preset a first threshold and a second threshold of the optical flow intensity (the second threshold being larger than the first). When detecting that the optical flow intensity Q during 0-T1 of the optical flow curve is smaller than the first threshold, the mobile phone extracts X frames of shot pictures within 0-T1 at the first frame extraction frequency; when detecting that Q during T1-T2 is greater than the first threshold and smaller than the second threshold, the mobile phone extracts Y frames within T1-T2 at the second frame extraction frequency (greater than the first); when detecting that Q after the time point T2 is greater than the second threshold, the mobile phone extracts Z frames after T2 at the third frame extraction frequency (greater than the second). That is, the more obvious and drastic the scene change in the shot picture, the higher the frame extraction frequency used by the mobile phone in the delayed photography; correspondingly, the slower the scene change, the lower the frame extraction frequency.
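With three tiers the selection becomes a piecewise mapping from Q to an extraction interval, sketched below with placeholder threshold values:

    def extraction_interval(q, first_threshold=1.0, second_threshold=3.0):
        # Piecewise mapping from optical flow intensity to the interval (in
        # seconds) between extracted frames: faster change, denser sampling.
        if q < first_threshold:
            return 2.0   # first (lowest) frame extraction frequency
        if q < second_threshold:
            return 1.0   # second frame extraction frequency
        return 0.5       # third (highest) frame extraction frequency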
In other embodiments, the mobile phone may also provide the user with the function of manually setting the frame extraction frequency for different recording periods. For example, before recording a delayed video, the user may manually enter the frame extraction frequencies of different recording periods in the camera's settings interface. For instance, the user may set the total recording time of the delayed photography to 30 minutes, setting the frame extraction frequency of the first 10 minutes and the last 10 minutes to one frame every second, and that of the middle 10 minutes to one frame every 0.5 second. Subsequently, while recording the delayed video, the mobile phone applies the corresponding user-set frame extraction frequency in each recording period, which is not limited in this embodiment of the application.
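A user-entered schedule of this kind can be represented as a small lookup table consulted during recording; the sketch below encodes the 30-minute example above, with times in seconds (the representation itself is an assumption for illustration):

    # (start_s, end_s, extraction interval in seconds).
    USER_SCHEDULE = [
        (0, 600, 1.0),      # first 10 minutes: one frame every second
        (600, 1200, 0.5),   # middle 10 minutes: one frame every 0.5 second
        (1200, 1800, 1.0),  # last 10 minutes: one frame every second
    ]

    def interval_at(t, schedule=USER_SCHEDULE, default=1.0):
        # Return the user-configured extraction interval at recording time t.
        for start, end, interval in schedule:
            if start <= t < end:
                return interval
        return default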
S305, in response to the stop recording operation executed by the user in the delayed photography mode, the mobile phone encodes the extracted M frames of shot pictures into a delayed video, wherein the M frames include the X frames and the Y frames of shot pictures.
When the user wishes to end this delayed photography, as shown in fig. 12, the user may click the recording button 404 in the preview interface 401 again. In response to this stop recording operation, the mobile phone may encode the extracted M frames of shot pictures into the delayed video in chronological order. The extracted M frames include all shot pictures extracted by the mobile phone at the corresponding frame extraction frequencies of the different recording periods.
For example, if the mobile phone has extracted 300 frames of shot pictures in total when the user clicks the recording button 404, the mobile phone may encode these 300 frames at a preset frame rate. With a preset frame rate of 30 frames/second, the mobile phone encodes the extracted 300 frames into a delayed video with a length of 10 seconds.
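Encoding the extracted frames at the preset playback frame rate can be sketched with OpenCV's VideoWriter; the codec, file name, and use of OpenCV here are illustrative assumptions, since the actual device would use its own encoder:

    import cv2

    def encode_timelapse(frames, path="timelapse.mp4", fps=30.0):
        # Write the extracted frames as a video played back at fps, so 300
        # extracted frames become a 10-second time-lapse at 30 frames/second.
        height, width = frames[0].shape[:2]
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (width, height))
        for frame in frames:
            writer.write(frame)
        writer.release()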
In the above embodiment, the method of dynamically adjusting the frame extraction frequency during the delayed photography is described by taking the example that the mobile phone calculates the optical flow intensity of the shot pictures. In other embodiments of the application, the correspondence between different shooting scenes and different frame extraction frequencies may be stored in the mobile phone in advance. For example, for a scene with slow change, such as a sky scene, the corresponding frame extraction frequency is one frame every 2 seconds; for a scene with rapid change, such as a bird scene, the corresponding frame extraction frequency is one frame every 0.5 second.
Then, after the mobile phone starts to record the delayed video, the mobile phone can recognize the shooting scene in the collected shot pictures in real time, for example through image analysis or AI (artificial intelligence) recognition algorithms. As shown in fig. 13, if the mobile phone recognizes that the shooting scenes in the 1st to 10th seconds are all sky scenes, the mobile phone may extract shot pictures, at the first frame extraction frequency of one frame every 2 seconds, from the pictures collected in the 1st to 10th seconds. If the mobile phone recognizes that the shooting scenes in the 10th to 14th seconds are all bird scenes, the mobile phone may extract shot pictures, at the second frame extraction frequency of one frame every 0.5 second, from the pictures collected in the 10th to 14th seconds. Finally, the mobile phone can encode the extracted shot pictures into the delayed video.
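This scene-based variant replaces the optical flow test with a lookup from a pre-stored scene-to-frequency table. A minimal sketch, in which classify_scene stands in for whatever image-analysis or AI classifier the device uses (the classifier and the table values are assumptions for illustration):

    # Pre-stored correspondence between scenes and extraction intervals (s).
    SCENE_INTERVALS = {
        "sky": 2.0,    # slow-changing scene: one frame every 2 seconds
        "bird": 0.5,   # fast-changing scene: one frame every 0.5 second
    }
    DEFAULT_INTERVAL = 1.0

    def interval_for_frame(frame, classify_scene):
        # classify_scene(frame) -> str is a hypothetical classifier standing
        # in for the device's image-analysis or AI recognition algorithm.
        return SCENE_INTERVALS.get(classify_scene(frame), DEFAULT_INTERVAL)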
That is to say, when recording the delayed video, the mobile phone can extract the pictures of the delayed video from the recorded shot pictures at different frame extraction frequencies in different shooting scenes, so that the finally formed delayed video accurately presents the dynamic process of scene change in each shooting scene, improving the shooting effect of the delayed video and the user experience.
Alternatively, in other embodiments of the application, after the mobile phone starts to record the delayed video, the mobile phone may recognize in real time the shooting target in the collected shot pictures and the moving speed of that target. As shown in fig. 14, if the mobile phone recognizes that the shooting target in the 1st to 5th seconds is a person 1401 whose moving speed is smaller than a preset value, the mobile phone may extract shot pictures, at the first frame extraction frequency of one frame every 2 seconds, from the pictures collected in the 1st to 5th seconds. If the mobile phone recognizes that the shooting target in the 5th to 10th seconds is a person 1402 whose moving speed is greater than the preset value, the mobile phone may extract shot pictures, at the second frame extraction frequency of one frame every 0.5 second, from the pictures collected in the 5th to 10th seconds. Finally, the mobile phone can encode the extracted shot pictures into the delayed video.
The shooting target may be manually selected by the user; for example, the user may select the shooting target by manually focusing on it in the shot picture displayed in the preview interface. Alternatively, the shooting target may be automatically recognized by the mobile phone through image analysis or AI recognition algorithms, which is not limited in this embodiment of the application.
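One simple proxy for the target's moving speed is the displacement of its bounding-box center between adjacent frames, assuming some detector (not specified by the patent) supplies a bounding box per frame. A minimal sketch under that assumption:

    def target_speed(prev_box, next_box, dt=1.0):
        # Approximate the target's speed in pixels/second from the centers
        # of its bounding boxes (x, y, w, h) in two frames taken dt apart.
        (px, py, pw, ph), (nx, ny, nw, nh) = prev_box, next_box
        dx = (nx + nw / 2) - (px + pw / 2)
        dy = (ny + nh / 2) - (py + ph / 2)
        return (dx * dx + dy * dy) ** 0.5 / dt

    def interval_for_speed(speed, speed_threshold=20.0):
        # Slow target: one frame every 2 s; fast target: one every 0.5 s.
        return 0.5 if speed >= speed_threshold else 2.0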
It should be noted that the foregoing embodiment takes the delayed video of a sunrise as an example. It is to be understood that the mobile phone may also use this recording method of delayed photography to record a sunset process, a flower blooming process, and the like, which is not limited in any way in the embodiment of the application.
For example, after the mobile phone starts to record each frame of shot picture, the mobile phone may determine the optical flow intensity Q between each pair of adjacent collected frames. In general, during the waiting period before a flower opens, the shot content changes little, and the optical flow intensity Q between adjacent frames is generally small. While the flower is opening, the shot content changes more, and the optical flow intensity Q between adjacent frames is generally larger. Then, if the optical flow intensity Q between adjacent frames within the time period T1 is detected to be smaller than the first threshold, indicating that the scene in the shot picture changes little, the mobile phone may extract X frames of shot pictures at the lower first frame extraction frequency; if Q within the time period T2 is detected to be greater than or equal to the first threshold, indicating that the scene changes markedly, the mobile phone may extract Y frames at the higher second frame extraction frequency. In this way, the finally formed delayed video accurately presents the dynamic process of the bud opening during the blooming, improving the shooting effect of the delayed video and the user experience.
Of course, when recording a delayed video of a blooming process, besides adjusting the frame extraction frequency in real time according to the optical flow intensity Q between adjacent frames, the mobile phone may also adjust the frame extraction frequency according to other parameters capable of reflecting the change range of the shot content. For example, the mobile phone may take the flower in the shot picture as the shooting target and detect the moving speed of this target. Furthermore, the mobile phone can adjust the frame extraction frequency in real time according to the moving speed of the shooting target when extracting the pictures of the delayed video, which is not limited in this embodiment of the application.
That is to say, when recording the delayed video, if the moving speed of the shooting target is detected to be slow, the mobile phone extracts the pictures of the delayed video from the recorded shot pictures at a low frame extraction frequency; if the moving speed is detected to be fast, the mobile phone extracts them at a high frame extraction frequency, so that the delayed video emphasizes the part in which the shooting target moves quickly.
In other embodiments of the application, during the recording of the delayed video, the mobile phone may first store each frame of shot picture captured by the camera without performing the above steps S303-S305. When the user stops recording, the mobile phone may then determine the frame extraction frequencies of the different recording periods by executing the above steps S303-S304 and extract the stored shot pictures accordingly, so as to form the delayed video.
S306, in response to the operation of the user opening the delayed video, the mobile phone plays the delayed video at a preset frame rate.
After the user finishes the delayed photography, the mobile phone may store the delayed video obtained in step S305 in an album application (also referred to as a gallery application) of the mobile phone. As shown in fig. 15, when the user previews the delayed video 1501 in the album application, if it is detected that the user clicks the play button 1502 on the delayed video 1501, the mobile phone may play the delayed video 1501 at the frame rate at which it was encoded (for example, 30 frames/second).
Since the mobile phone extracted shot pictures at a higher frame extraction frequency during the periods of drastic scene change, the delayed video 1501 contains more pictures showing those changes when it is played. For example, when recording the delayed video 1501, the mobile phone raised the frame extraction frequency from one frame every second to one frame every 0.5 second during the period in which the sun rises, so the finally extracted pictures of the rising sun are more numerous and reflect its detailed changes more vividly and accurately. In this way, the delayed video 1501 emphasizes, during playback, the dynamic process of the obvious scene change, improving the recording effect of the delayed photography and the user experience.
As shown in fig. 16, an embodiment of the present application discloses an electronic device, which can be used to implement the method described in the above method embodiments. The electronic device may specifically include: an acquisition module 1601, a preview module 1602, an optical flow analysis module 1603, a frame extraction module 1604, an encoding module 1605, and a playback module 1606.
The acquisition module 1601 is configured to support the electronic device to execute the process S301 in fig. 3; the preview module 1602 is configured to support the electronic device to execute the process S302 in fig. 3; the optical flow analysis module 1603 is configured to support the electronic device to execute the process S303 in fig. 3; the frame extraction module 1604 is configured to support the electronic device to execute the processes S304a, S304b, etc. in fig. 3; the encoding module 1605 is configured to support the electronic device to execute the process S305 in fig. 3; the playback module 1606 is configured to support the electronic device to execute the process S306 in fig. 3. All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
As shown in fig. 17, an embodiment of the present application discloses an electronic device, including: a touch screen 1701, the touch screen 1701 comprising a touch sensitive surface 1706 and a display screen 1707; one or more processors 1702; a memory 1703; one or more cameras 1708; and one or more computer programs 1704. The various devices described above may be connected by one or more communication buses 1705. Wherein the one or more computer programs 1704 are stored in the memory 1703 and configured to be executed by the one or more processors 1702, the one or more computer programs 1704 including instructions that may be used to perform the steps of the embodiments described above.
For example, the processor 1702 may be specifically the processor 110 shown in fig. 1, the memory 1703 may be specifically the internal memory 121 and/or the external memory 120 shown in fig. 1, the display screen 1707 may be specifically the display screen 194 shown in fig. 1, the camera 1708 may be specifically the camera 193 shown in fig. 1, and the touch-sensitive surface 1706 may be specifically a touch sensor in the sensor module 180 shown in fig. 1, which is not limited in this embodiment of the application.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A recording method of time-lapse photography, comprising:
the electronic equipment displays a preview interface of the delayed photography mode;
responding to a recording operation executed by a user, and starting to record each frame of shooting picture captured by the camera by the electronic equipment by using the acquisition frequency;
the electronic equipment extracts X frames of shooting pictures from N1 frames of shooting pictures collected in a first recording time period by using a first frame extracting frequency, wherein X is less than N1;
the electronic equipment extracts Y frames of shot pictures from N2 frames of shot pictures collected in a second recording time period by using a second frame extracting frequency, wherein Y is less than N2, the second frame extracting frequency is different from the first frame extracting frequency, and the second recording time period is not overlapped with the first recording time period;
in response to the recording stopping operation executed by a user, the electronic equipment encodes the extracted M frames of shooting pictures into a delayed shooting video, wherein the M frames of shooting pictures comprise the X frames of shooting pictures and the Y frames of shooting pictures;
after the electronic device starts to record each frame of shooting picture captured by the camera, the method further comprises:
if the change amplitude of the shot content in the shot picture in the first recording time period is smaller than a threshold value, the electronic equipment determines that the frame extracting frequency in the first recording time period is the first frame extracting frequency;
if the change amplitude of the shot content in the shot picture in the second recording time period is detected to be larger than or equal to the threshold value, the electronic equipment determines that the frame extracting frequency in the second recording time period is the second frame extracting frequency, and the second frame extracting frequency is larger than the first frame extracting frequency.
2. The method of claim 1, further comprising, after the electronic device starts recording each frame captured by the camera:
the electronic equipment calculates the optical flow intensity between every two collected adjacent shot pictures, wherein the optical flow intensity is used for reflecting the change amplitude of the shot contents in the two adjacent shot pictures;
if it is detected that the variation amplitude of the shot content in the shot picture in the first recording time period is smaller than the threshold, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency, and the method includes:
if the optical flow intensity between two adjacent shot pictures in the first recording time period is detected to be smaller than a first preset value, the electronic equipment determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
if it is detected that the variation amplitude of the shot content in the shot picture in the second recording time period is greater than or equal to the threshold, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency, including:
and if the optical flow intensity between two adjacent shot pictures in the second recording time period is detected to be greater than or equal to the first preset value, the electronic equipment determines the frame extraction frequency in the second recording time period to be the second frame extraction frequency.
3. The method of claim 1,
if it is detected that the variation amplitude of the shot content in the shot picture in the first recording time period is smaller than a threshold value, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency, and the method includes:
if the N1 frames of shot pictures collected in the first recording time period are detected to belong to a preset first shooting scene, the electronic equipment determines that the frame extracting frequency in the first recording time period is the first frame extracting frequency;
if it is detected that the variation amplitude of the shot content in the shot picture in the second recording time period is greater than or equal to the threshold, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency, including:
if it is detected that the N2 frames of shot pictures collected in the second recording time period belong to a preset second shooting scene, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
4. The method of claim 1,
if it is detected that the variation amplitude of the shot content in the shot picture in the first recording time period is smaller than a threshold value, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency, and the method includes:
if the movement speed of the shooting target in the first recording time period is detected to be smaller than a second preset value, the electronic equipment determines that the frame extracting frequency in the first recording time period is the first frame extracting frequency;
if it is detected that the variation amplitude of the shot content in the shot picture in the second recording time period is greater than or equal to the threshold, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency, including:
and if the movement speed of the shooting target in the second recording time period is detected to be greater than or equal to the second preset value, the electronic equipment determines the frame extracting frequency in the second recording time period to be the second frame extracting frequency.
5. The method according to any one of claims 1-4, wherein before the electronic device encodes the extracted M-frame captured picture as a delayed photographic video, further comprising:
the electronic equipment extracts Z-frame shooting pictures from N3-frame shooting pictures collected in a third recording time period by using a third frame extracting frequency, wherein the third frame extracting frequency is different from the second frame extracting frequency and the first frame extracting frequency, the third recording time period is not overlapped with the second recording time period and the first recording time period, and the M-frame shooting pictures comprise the Z-frame shooting pictures.
6. The method of claim 5, further comprising, after the electronic device starts recording each frame captured by the camera:
and the electronic equipment displays each recorded shooting picture in real time in the preview picture.
7. The method according to claim 6, wherein when the electronic device displays each frame of the captured image being recorded in real time in the preview image, the method further comprises:
and the electronic equipment displays the current recording time length and the playing time length of the delayed shooting video corresponding to the recording time length in the preview picture in real time.
8. The method of claim 7, wherein the electronic device encodes the extracted M-frame captured pictures as a delayed photographic video, comprising:
and the electronic equipment encodes the extracted M frames of shooting pictures into the delayed shooting video of the delayed shooting according to a preset frame rate.
9. The method according to claim 8, wherein after the electronic device encodes the extracted M-frame captured picture as a delayed photographic video, further comprising:
and responding to the operation of opening the delayed shooting video by the user, and playing the delayed shooting video by the electronic equipment according to a preset frame rate.
10. An electronic device, comprising:
a touch screen comprising a touch sensitive surface and a display screen;
one or more processors;
one or more memories;
one or more cameras;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the steps of:
displaying a preview interface of the delayed photography mode;
responding to the recording operation executed by a user, and starting to record each frame of shooting picture captured by the camera by using the acquisition frequency;
extracting X frames of shot pictures from N1 frames of shot pictures collected in a first recording time period by using a first frame extracting frequency, wherein X is less than N1;
extracting Y frames of shot pictures from N2 frames of shot pictures collected in a second recording time period by using a second frame extracting frequency, wherein Y is less than N2, the second frame extracting frequency is different from the first frame extracting frequency, and the second recording time period is not overlapped with the first recording time period;
responding to the recording stopping operation executed by a user, and coding the extracted M frames of shooting pictures into a delayed shooting video, wherein the M frames of shooting pictures comprise the X frames of shooting pictures and the Y frames of shooting pictures;
after the electronic device starts to record each frame of shooting picture captured by the camera, the electronic device is further configured to execute:
if the change amplitude of the shot content in the shot picture in the first recording time period is smaller than a threshold value, determining the frame extracting frequency in the first recording time period as the first frame extracting frequency;
and if the change amplitude of the shot content in the shot picture in the second recording time period is detected to be larger than or equal to the threshold value, determining the frame extracting frequency in the second recording time period to be the second frame extracting frequency, wherein the second frame extracting frequency is larger than the first frame extracting frequency.
11. The electronic device of claim 10, wherein after the electronic device starts recording each frame captured by the camera, the electronic device is further configured to perform:
calculating the optical flow intensity between every two collected adjacent frames of shot pictures, wherein the optical flow intensity is used for reflecting the change amplitude of the shot contents in the two adjacent frames of shot pictures;
if it is detected that the variation amplitude of the shot content in the shot picture in the first recording time period is smaller than the threshold, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency, and the method specifically includes:
if the optical flow intensity between two adjacent shot pictures in the first recording time period is detected to be smaller than a first preset value, determining the frame extraction frequency in the first recording time period as the first frame extraction frequency;
if it is detected that the variation amplitude of the shot content in the shot picture in the second recording time period is greater than or equal to the threshold, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency, and the method specifically includes:
and if the optical flow intensity between two adjacent shot pictures in the second recording time period is detected to be greater than or equal to the first preset value, determining the frame extracting frequency in the second recording time period as the second frame extracting frequency.
12. The electronic device of claim 10,
if it is detected that the variation amplitude of the shot content in the shot picture in the first recording time period is smaller than a threshold value, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency, and the method specifically includes:
if it is detected that the N1 frames of shot pictures collected in the first recording time period belong to a preset first shot scene, determining the frame extraction frequency in the first recording time period as the first frame extraction frequency;
if it is detected that the variation amplitude of the shot content in the shot picture in the second recording time period is greater than or equal to the threshold, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency, and specifically includes:
and if the N2 frames of shot pictures collected in the second recording time period are detected to belong to a preset second shooting scene, determining the frame extracting frequency in the second recording time period as the second frame extracting frequency.
13. The electronic device of claim 10,
if it is detected that the variation amplitude of the shot content in the shot picture in the first recording time period is smaller than a threshold value, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency, and the method specifically includes:
if the movement speed of the shooting target in the first recording time period is detected to be smaller than a second preset value, determining the frame extracting frequency in the first recording time period as the first frame extracting frequency;
if it is detected that the variation amplitude of the shot content in the shot picture in the second recording time period is greater than or equal to the threshold, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency, and specifically includes:
and if the movement speed of the shooting target in the second recording time period is detected to be greater than or equal to the second preset value, determining the frame extracting frequency in the second recording time period to be the second frame extracting frequency.
14. The electronic device of any of claims 10-13, wherein prior to the electronic device encoding the extracted M-frame captured picture as a delayed photographic video, the electronic device is further configured to perform:
and extracting Z-frame shot pictures from the N3-frame shot pictures collected in a third recording time period by using a third frame extracting frequency, wherein the third frame extracting frequency is different from the second frame extracting frequency and the first frame extracting frequency, the third recording time period is not overlapped with the second recording time period and the first recording time period, and the M-frame shot pictures comprise the Z-frame shot pictures.
15. The electronic device of claim 14, wherein after the electronic device starts recording each frame captured by the camera, the electronic device is further configured to perform:
and the electronic equipment displays each recorded shooting picture in real time in the preview picture.
16. The electronic device according to claim 15, wherein when the electronic device displays each frame of shooting picture being recorded in real time in the preview picture, the electronic device is further configured to perform:
and displaying the current recording time length and the playing time length of the delayed photographic video corresponding to the recording time length in real time in the preview picture.
17. The electronic device according to claim 16, wherein the electronic device encodes the extracted M-frame captured picture as a delayed video, and specifically comprises:
and coding the extracted M frames of shot pictures into the delayed shooting video of the delayed shooting according to a preset frame rate.
18. The electronic device of claim 17, wherein after the electronic device encodes the extracted M-frame captured picture as a delayed photographic video, the electronic device is further configured to perform:
and responding to the operation of opening the delayed shooting video by the user, and playing the delayed shooting video according to a preset frame rate.
19. A computer-readable storage medium having instructions stored thereon, which, when run on an electronic device, cause the electronic device to perform the recording method of delayed photography according to any one of claims 1 to 9.
CN201910229645.5A 2019-03-25 2019-03-25 Recording method for delayed photography and electronic equipment Active CN110086985B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910229645.5A CN110086985B (en) 2019-03-25 2019-03-25 Recording method for delayed photography and electronic equipment
PCT/CN2020/079402 WO2020192461A1 (en) 2019-03-25 2020-03-14 Recording method for time-lapse photography, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910229645.5A CN110086985B (en) 2019-03-25 2019-03-25 Recording method for delayed photography and electronic equipment

Publications (2)

Publication Number Publication Date
CN110086985A CN110086985A (en) 2019-08-02
CN110086985B true CN110086985B (en) 2021-03-30

Family

ID=67413619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910229645.5A Active CN110086985B (en) 2019-03-25 2019-03-25 Recording method for delayed photography and electronic equipment

Country Status (2)

Country Link
CN (1) CN110086985B (en)
WO (1) WO2020192461A1 (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110086985B (en) * 2019-03-25 2021-03-30 华为技术有限公司 Recording method for delayed photography and electronic equipment
CN112532859B (en) * 2019-09-18 2022-05-31 华为技术有限公司 Video acquisition method and electronic equipment
CN112532857B (en) * 2019-09-18 2022-04-12 华为技术有限公司 Shooting method and equipment for delayed photography
CN112887583B (en) * 2019-11-30 2022-07-22 华为技术有限公司 Shooting method and electronic equipment
CN111294509A (en) * 2020-01-22 2020-06-16 Oppo广东移动通信有限公司 Video shooting method, device, terminal and storage medium
CN113225490B (en) * 2020-02-04 2024-03-26 Oppo广东移动通信有限公司 Time-delay photographing method and photographing device thereof
CN111240184B (en) * 2020-02-21 2021-12-31 华为技术有限公司 Method for determining clock error, terminal and computer storage medium
CN111526281B (en) * 2020-03-25 2021-06-25 东莞市至品创造数码科技有限公司 Method and device for calculating time length of delayed photographic image
CN111464760A (en) * 2020-05-06 2020-07-28 Oppo(重庆)智能科技有限公司 Dynamic image generation method and device and terminal equipment
CN113747049B (en) * 2020-05-30 2023-01-13 华为技术有限公司 Shooting method and equipment for delayed photography
WO2022021128A1 (en) * 2020-07-29 2022-02-03 深圳市大疆创新科技有限公司 Image processing method, electronic device, camera and readable storage medium
CN114615421B (en) * 2020-12-07 2023-06-30 华为技术有限公司 Image processing method and electronic equipment
CN112637528B (en) * 2020-12-21 2023-12-29 维沃移动通信有限公司 Picture processing method and device
CN112702607B (en) * 2020-12-25 2022-11-22 深圳大学 Intelligent video compression method and device based on optical flow decision
CN114827443A (en) * 2021-01-29 2022-07-29 深圳市万普拉斯科技有限公司 Video frame selection method, video delay processing method and device and computer equipment
CN113726949B (en) * 2021-05-31 2022-08-26 荣耀终端有限公司 Video processing method, electronic device and storage medium
CN113810596B (en) * 2021-07-27 2023-01-31 荣耀终端有限公司 Time-delay shooting method and device
CN113691721B (en) * 2021-07-28 2023-07-18 浙江大华技术股份有限公司 Method, device, computer equipment and medium for synthesizing time-lapse photographic video
CN113643728B (en) * 2021-08-12 2023-08-22 荣耀终端有限公司 Audio recording method, electronic equipment, medium and program product
CN115776532B (en) * 2021-09-07 2023-10-20 荣耀终端有限公司 Method for capturing images in video and electronic equipment
CN113556473B (en) * 2021-09-23 2022-02-08 深圳市天和荣科技有限公司 Shooting method and device for flower blooming process, electronic equipment and storage medium
CN114390236A (en) * 2021-12-17 2022-04-22 云南腾云信息产业有限公司 Video processing method, video processing device, computer equipment and storage medium
CN114679607B (en) * 2022-03-22 2024-03-05 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN114827477B (en) * 2022-05-26 2024-03-29 维沃移动通信有限公司 Method, device, electronic equipment and medium for time-lapse photography
CN116708751B (en) * 2022-09-30 2024-02-27 荣耀终端有限公司 Method and device for determining photographing duration and electronic equipment
CN115988262A (en) * 2022-12-14 2023-04-18 北京有竹居网络技术有限公司 Method, apparatus, device and medium for video processing
CN117714899A (en) * 2023-07-28 2024-03-15 荣耀终端有限公司 Shooting method for time-lapse shooting and electronic equipment
CN117714850A (en) * 2023-08-31 2024-03-15 荣耀终端有限公司 Time-delay photographing method and related equipment thereof

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9247098B2 (en) * 2013-04-09 2016-01-26 Here Global B.V. Automatic time lapse capture
CN104539864B (en) * 2014-12-23 2018-02-02 小米科技有限责任公司 The method and apparatus for recording image
US9900499B2 (en) * 2015-05-21 2018-02-20 Apple Inc. Time lapse user interface enhancements
KR102527811B1 (en) * 2015-12-22 2023-05-03 삼성전자주식회사 Apparatus and method for generating time lapse image
CN105959539A (en) * 2016-05-09 2016-09-21 南京云恩通讯科技有限公司 Time-lapse photography method for automatically determining delay rate
US10187607B1 (en) * 2017-04-04 2019-01-22 Gopro, Inc. Systems and methods for using a variable capture frame rate for video capture
CN107197162B (en) * 2017-07-07 2020-11-13 盯盯拍(深圳)技术股份有限公司 Shooting method, shooting device, video storage equipment and shooting terminal
CN107396019B (en) * 2017-08-11 2019-05-17 维沃移动通信有限公司 A kind of slow motion video method for recording and mobile terminal
CN109068052B (en) * 2018-07-24 2020-11-10 努比亚技术有限公司 Video shooting method, mobile terminal and computer readable storage medium
CN110086985B (en) * 2019-03-25 2021-03-30 华为技术有限公司 Recording method for delayed photography and electronic equipment

Also Published As

Publication number Publication date
WO2020192461A1 (en) 2020-10-01
CN110086985A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110086985B (en) Recording method for delayed photography and electronic equipment
CN110072070B (en) Multi-channel video recording method, equipment and medium
CN112532857B (en) Shooting method and equipment for delayed photography
US11889180B2 (en) Photographing method and electronic device
CN110381276B (en) Video shooting method and electronic equipment
CN113727016A (en) Shooting method and electronic equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN113810596B (en) Time-delay shooting method and device
CN115086567B (en) Time delay photographing method and device
WO2023284591A1 (en) Video capture method and apparatus, electronic device, and storage medium
CN112532903B (en) Intelligent video recording method, electronic equipment and computer readable storage medium
CN118042280B (en) Image processing method, electronic device, computer program product, and storage medium
CN113726949B (en) Video processing method, electronic device and storage medium
CN113593567B (en) Method for converting video and sound into text and related equipment
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN113852755A (en) Photographing method, photographing apparatus, computer-readable storage medium, and program product
CN115412678B (en) Exposure processing method and device and electronic equipment
CN114466238A (en) Frame demultiplexing method, electronic device and storage medium
CN115297269B (en) Exposure parameter determination method and electronic equipment
CN116095509B (en) Method, device, electronic equipment and storage medium for generating video frame
RU2789447C1 (en) Method and apparatus for multichannel video recording
CN115131692A (en) Heart rate detection method and device
CN115209062A (en) Image processing method and device
CN117998193A (en) Recommendation method of shooting function and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant