WO2020192461A1 - Recording method for time-lapse photography, and electronic device


Info

Publication number
WO2020192461A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
electronic device
shooting
time period
recording
Application number
PCT/CN2020/079402
Other languages
French (fr)
Chinese (zh)
Inventor
陈绍君
杨丽霞
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020192461A1

Classifications

    All classifications fall under H (Electricity), H04 (Electric communication technique), H04N (Pictorial communication, e.g. television):
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 5/144: Movement detection
    • H04N 5/147: Scene change detection
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/76: Television signal recording

Definitions

  • This application relates to the field of terminal technology, and in particular to a recording method and electronic equipment for time-lapse photography.
  • Time-lapse photography, also called time-lapse video recording, is a shooting technique that can compress time and reproduce the process of slowly changing scenes in a short time.
  • The resulting time-lapse video is typically played back at a normal frame rate (for example, 24 frames per second).
  • Generally, the frame extraction frequency used by the time-lapse photography function is preset when the mobile phone leaves the factory, or can be set manually by the user. Once the frequency is determined, the mobile phone collects frames according to the set frequency during time-lapse photography and forms a time-lapse video.
  • With a fixed frame extraction frequency, fast-changing highlights such as the moment a flower blooms or the instant of sunrise yield only a few extracted frames and appear very briefly in the video, while slowly changing content yields many extracted frames and occupies a relatively long portion. The result cannot highlight the most interesting parts of the time-lapse recording, and the user experience is poor.
  • This application provides a recording method and electronic device for time-lapse photography that can record a time-lapse video with a dynamically changing frame extraction frequency, so as to highlight the most interesting parts of the time-lapse recording and improve the user experience.
  • In a first aspect, this application provides a recording method for time-lapse photography, including: an electronic device displays a preview interface of the time-lapse photography mode; in response to a recording operation performed by the user (for example, tapping the record button in the preview interface), the electronic device starts recording each frame of the shooting picture captured by the camera; during recording, the electronic device uses a first frame extraction frequency to extract X (X < N1) frames from the N1 frames captured during a first recording time period, and uses a second frame extraction frequency (different from the first) to extract Y (Y < N2) frames from the N2 frames captured during a second recording time period; in response to the user's stop-recording operation (for example, tapping the record button again in the preview interface), the electronic device encodes the extracted M frames (including the above X frames and Y frames) into a time-lapse photography video.
  • In other words, during recording the electronic device can dynamically apply different frame extraction frequencies to the frames being recorded to form the time-lapse video, so that the finished video presents the changes of the shooting content in different recording time periods, improving both the photographic effect of the time-lapse shooting and the user experience.
  • In a possible implementation, after the electronic device starts recording the frames captured by the camera, the method further includes: if the variation of the shooting content during the first recording time period is detected to be less than a threshold, the electronic device determines the frame extraction frequency in the first recording time period to be the first frame extraction frequency; if the variation of the shooting content during the second recording time period is detected to be greater than or equal to the threshold, the electronic device determines the frame extraction frequency in the second recording time period to be the second frame extraction frequency, where the second frame extraction frequency is greater than the first.
  • In other words, the electronic device selects the frame extraction frequency according to how much the shooting content changes. When the change in the shooting content is small, the content is moving slowly, and the electronic device can use a lower frame extraction frequency. When the change is large, the content is moving quickly, and the electronic device can use a higher frame extraction frequency to extract more frames; these additional frames reflect the details of the change more accurately, so the recorded time-lapse video emphasizes the fast-changing highlights, improving the photographic effect of the time-lapse shooting and the user experience.
  • In a possible implementation, after the electronic device starts recording the frames captured by the camera, the method further includes: the electronic device calculates the optical flow intensity between every two adjacent captured frames, where the optical flow intensity reflects how much the shooting content changes between those two frames. If the optical flow intensity between adjacent frames during the first recording time period is less than a first preset value, the scene in the shooting picture is changing little, and the electronic device determines the frame extraction frequency in the first recording time period to be the first frame extraction frequency with a smaller value; if the optical flow intensity between adjacent frames during the second recording time period is greater than or equal to the first preset value, the scene is changing noticeably, and the electronic device determines the frame extraction frequency in the second recording time period to be the second frame extraction frequency with a larger value. A minimal sketch of this rule is given below.
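  • As an illustration of this rule, the following is a minimal Python sketch; the threshold value, the two extraction intervals, and the function name are hypothetical placeholders rather than values taken from this application.

```python
def pick_frame_interval(mean_optical_flow: float,
                        first_preset_value: float = 0.5,  # hypothetical threshold
                        slow_interval_s: float = 2.0,     # first frequency: 1 frame / 2 s
                        fast_interval_s: float = 1.0):    # second frequency: 1 frame / 1 s
    """Choose the frame extraction interval (seconds between extracted frames)
    for one recording time period from its optical flow intensity."""
    if mean_optical_flow < first_preset_value:
        # Scene changes little: use the smaller first frame extraction frequency.
        return slow_interval_s
    # Scene changes noticeably: use the larger second frame extraction frequency.
    return fast_interval_s
```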
  • In a possible implementation, if the N1 frames captured during the first recording time period are detected to belong to a preset first shooting scene, the electronic device may determine that the frame extraction frequency in the first recording time period is the first frame extraction frequency; if the N2 frames captured during the second recording time period are detected to belong to a preset second shooting scene, the electronic device may determine that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
  • For example, the scene in the first shooting scene moves slowly, while the scene in the second shooting scene moves faster. In this way, the time-lapse video finally produced by the electronic device can accurately present the dynamic process of scene changes in different shooting scenes, improving the shooting effect of the time-lapse video and the user experience.
  • In a possible implementation, if the moving speed of the shooting target during the first recording time period is detected to be less than a second preset value, the electronic device may determine that the frame extraction frequency in the first recording time period is the first frame extraction frequency; if the moving speed of the shooting target during the second recording time period is greater than or equal to the second preset value, the electronic device may determine that the frame extraction frequency in the second recording time period is the second frame extraction frequency, so that the rapid movement of the shooting target is highlighted in the time-lapse video.
  • In a possible implementation, before the electronic device encodes the extracted M frames into the time-lapse video, the method further includes: the electronic device uses a third frame extraction frequency to extract Z frames from the N3 frames captured during a third recording time period (these Z frames are also part of the M extracted frames), where the third frame extraction frequency differs from both the first and the second frame extraction frequencies, and the third recording time period does not overlap with the first or the second recording time period.
  • the method further includes: the electronic device displays each frame of the shooting picture being recorded in the preview picture in real time.
  • In a possible implementation, when the electronic device displays each recorded frame in the preview screen in real time, it also displays the current recording duration and the playback duration of the time-lapse video corresponding to that recording duration, so that the user can know the length of the time-lapse video in real time.
  • In a possible implementation, encoding the extracted M frames into the time-lapse video includes: the electronic device encodes the extracted M frames into this time-lapse video according to a preset frame rate. In a possible implementation, the method further includes: in response to an operation of the user to open the time-lapse video, the electronic device plays the time-lapse video at the preset frame rate.
  • In a second aspect, this application provides an electronic device, including: a touch screen, one or more processors, one or more memories, one or more cameras, and one or more computer programs; the processor is coupled to the touch screen, the memory, and the camera; the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes them so that the electronic device performs any of the recording methods for time-lapse photography described above.
  • In a third aspect, this application provides a computer storage medium including computer instructions which, when run on an electronic device, cause the electronic device to execute the recording method for time-lapse photography described in any one of the first aspect.
  • In a fourth aspect, this application provides a computer program product which, when run on an electronic device, causes the electronic device to execute the recording method for time-lapse photography described in any one of the first aspect.
  • It can be understood that the electronic device of the second aspect, the computer storage medium of the third aspect, and the computer program product of the fourth aspect are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is a first schematic structural diagram of an electronic device according to an embodiment of this application;
  • FIG. 2 is a schematic diagram of a photographing principle according to an embodiment of this application;
  • FIG. 3 is a schematic flowchart of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 4 is a first schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 5 is a second schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 6 is a third schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 7 is a fourth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 8 is a fifth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 9 is a sixth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 10 is a seventh schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 11 is an eighth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 12 is a ninth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 13 is a tenth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 14 is an eleventh schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 15 is a twelfth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 16 is a second schematic structural diagram of an electronic device according to an embodiment of this application;
  • FIG. 17 is a third schematic structural diagram of an electronic device according to an embodiment of this application.
  • The recording method for time-lapse photography provided in the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, and virtual reality devices; the embodiments of this application impose no limitation on the type of device.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • For example, the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor through an I2C interface, so that the processor 110 and the touch sensor communicate through the I2C bus interface to realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely illustrative and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 can receive input from the battery 142 and/or the charging management module 140, and supply power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can be used to monitor performance parameters such as battery capacity, battery cycle times, battery charging voltage, battery discharging voltage, and battery health status (such as leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include one or more filters, switches, power amplifiers, low noise amplifiers (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating one or more communication processing modules.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the mobile phone 100 may include 1 or N cameras, and N is a positive integer greater than 1.
  • For example, the camera 193 may be a front camera or a rear camera. As shown in FIG. 2, the camera 193 generally includes a lens and a photosensitive element (sensor); the photosensitive element may be any photosensitive device such as a CCD (charge-coupled device) or a CMOS (complementary metal-oxide-semiconductor) device.
  • the reflected light of the object being photographed can generate an optical image after passing through the lens.
  • the optical image is projected onto the photosensitive element, and the photosensitive element converts the received light signal into an electrical signal, and further,
  • the camera 193 sends the obtained electrical signal to a DSP (Digital Signal Processing) module for digital signal processing, and finally obtains a frame of digital image.
  • the DSP can obtain a continuous multi-frame digital image according to the above-mentioned shooting principle, and the continuous multi-frame digital image can be encoded according to a certain frame rate to form a video.
  • In a shooting scenario, one or more frames of digital images output by the DSP can be displayed on the mobile phone 100 through the display screen 194, or the digital images can be stored in the internal memory 121 (or an external memory card connected through the external memory interface 120); the embodiments of this application impose no limitation on this.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
  • The processor 110 can execute the above-mentioned instructions stored in the internal memory 121 to enable the electronic device 100 to execute the recording method for time-lapse photography provided in some embodiments of this application, as well as various functional applications and data processing.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more application programs (such as a gallery, contacts, etc.) and so on.
  • The storage data area can store data (such as photos, contacts, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, universal flash storage (UFS), etc.
  • In other embodiments, the processor 110 executes the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor, so that the electronic device 100 executes the recording method for time-lapse photography provided in the embodiments of this application, as well as various functional applications and data processing.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the electronic device 100 answers a call or voice message, it can receive the voice by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic" or "voice transmitter", is used to convert sound signals into electrical signals. The user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with one or more microphones 170C.
  • the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals.
  • the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the sensor 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
  • the electronic device 100 provided in the embodiment of the present application may further include one or more devices such as the button 190, the motor 191, the indicator 192, and the SIM card interface 195, which is not limited in the embodiment of the present application.
  • Time-lapse photography, also called time-lapse video recording, is a shooting technique that can compress time and reproduce the process of slowly changing scenes in a short time.
  • the electronic device 100 can start to record each frame of the shooting picture captured by the camera 193.
  • After recording, the electronic device 100 can extract M (M < N) frames from the N (N > 1) frames captured by the camera 193 at a certain frame extraction frequency to form the time-lapse video of this shooting. When playing, the electronic device 100 plays the extracted M frames at a certain frame rate, so that the scene changes contained in the N frames actually captured are reproduced through the M extracted frames.
  • Optical flow is the instantaneous velocity of the pixel movement of a spatial moving object on an imaging plane (for example, a photographed picture).
  • the time interval is very small (for example, between two consecutive frames in the video)
  • the optical flow is also equivalent to the displacement of the target point.
  • When the human eye observes a moving object, the scene of the object forms a series of continuously changing images on the retina. This series of continuously changing information constantly "flows through" the retina (that is, the imaging plane) like a flow of light, which is why it is called optical flow.
  • Optical flow expresses how drastically the image changes, and it contains the motion information of objects between adjacent frames.
  • For example, if the position of point A in the t-th frame is (x1, y1) and its position in the (t+1)-th frame is (x2, y2), then the optical flow of point A between these two frames corresponds to the displacement (x2 - x1, y2 - y1), and the magnitude of that displacement can be taken as the optical flow intensity of point A.
  • In this way, the optical flow intensity of each pixel between two adjacent frames can be calculated. Then, based on the per-pixel optical flow intensities, the electronic device 100 can use a preset optical flow algorithm to determine the overall optical flow intensity between the t-th frame and the (t+1)-th frame. The greater the optical flow intensity, the greater the change in the shooting content between the t-th and (t+1)-th frames; the smaller the optical flow intensity, the smaller the change. A minimal sketch of such a calculation is given below.
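  • The application refers only to a "preset optical flow algorithm" without naming one. As a minimal sketch, assuming OpenCV's Farneback dense optical flow is used and the per-pixel displacement magnitudes are averaged into a single value, the optical flow intensity Q between two adjacent frames could be computed as follows; both assumptions are illustrative.

```python
import cv2
import numpy as np

def optical_flow_intensity(frame_t, frame_t1) -> float:
    """Optical flow intensity Q between two adjacent BGR frames:
    the mean per-pixel displacement magnitude (larger Q means larger change)."""
    prev_gray = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: one (dx, dy) displacement vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)  # |(dx, dy)| for every pixel
    return float(magnitude.mean())
```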
  • During recording, the electronic device 100 can use a dynamic frame extraction frequency to extract M (M < N) frames from the N frames captured by the camera 193 to form the time-lapse video. For example, if the optical flow intensity between adjacent frames captured in the first 10 seconds is less than a preset value, the pictures captured in those 10 seconds change little, and the electronic device 100 can extract them at a smaller first frame extraction frequency (for example, one frame every 3 seconds). If the optical flow intensity between adjacent frames captured in seconds 11-20 is greater than the preset value, the pictures captured in that period change greatly, and the electronic device 100 can extract them at a larger second frame extraction frequency (for example, one frame every 0.5 seconds). Finally, the M frames extracted by the electronic device 100 form the video of this time-lapse shooting.
  • That is, during recording the electronic device can select the frame extraction frequency according to how much the shooting content changes. When the change is small, the content is moving slowly and a lower frame extraction frequency can be used; when the change is large, the content is moving quickly and a higher frame extraction frequency can be used to extract more frames. These additional frames reflect the details of the change more accurately, so the recorded time-lapse video emphasizes the fast-changing highlights, improving the photographic effect of the time-lapse shooting and the user experience.
  • the following will take a mobile phone as an example of the above-mentioned electronic device 100 to describe in detail a recording method of time-lapse photography provided in an embodiment of the present application. As shown in FIG. 3, the method includes steps S301-S306.
  • S301: In response to the recording operation performed by the user in the time-lapse photography mode, the mobile phone starts to collect each frame of the shooting picture captured by the camera.
  • the camera application of the mobile phone has the function option of time-lapse photography.
  • the preview interface 401 displayed by the mobile phone is set with a function option 402 of time-lapse photography. If it is detected that the user selects the function option 402, the mobile phone can enter the time-lapse photography mode of the camera application.
  • one or more shooting modes such as photo mode, portrait mode, panorama mode, video mode, or slow motion mode can also be set in the preview interface 401, and the embodiment of the present application does not impose any limitation on this.
  • the preview interface 401 can display the shooting image 403 currently captured by the camera. Since the time-lapse photography video has not yet been recorded at this time, the shooting screen 403 displayed in real time in the preview interface 401 may be referred to as a preview screen.
  • The preview interface 401 also includes a recording button 404 for time-lapse photography. If it is detected that the user taps the recording button 404 in the preview interface 401, the user has performed the recording operation in the time-lapse photography mode; at this point the mobile phone can continue to use the camera to capture each frame and start recording the time-lapse video.
  • For example, the mobile phone can collect frames at a certain collection frequency. Taking a collection frequency of 30 frames per second as an example, after the user taps the recording button 404 in the preview interface 401, the phone captures 30 frames within seconds 0-1, another 30 frames within seconds 1-2, and so on. As the recording time elapses, the number of collected frames gradually accumulates, and the frames that the mobile phone later extracts from them according to the frame extraction frequency finally form the time-lapse video of this shooting.
  • the mobile phone can copy the video stream formed by each frame of the captured picture into a dual-channel video stream.
  • Each video stream includes every frame captured in real time after the mobile phone turns on the time-lapse recording function.
  • the mobile phone can use one of the video streams to execute the following step S302 to complete the display task of the preview interface during the time-lapse photography.
  • the mobile phone can use another video stream to perform the following steps S303-S305 to complete the time-lapse video production task.
  • S302: The mobile phone displays each frame of the collected shooting picture in the preview interface in real time.
  • In step S302, after the mobile phone starts to record the time-lapse video, as shown in FIG. 5, the mobile phone can display in the preview interface 401, in real time, each frame of the shooting picture 501 currently collected by the camera.
  • the mobile phone may display a shooting prompt 502 in the preview interface 401 to remind the user that the user is currently in the recording state.
  • the mobile phone can also load corresponding animation special effects on the recording button 404 to remind the user that the user is currently in the recording state.
  • the mobile phone may also display the currently recorded duration 503 in the preview interface 401.
  • the recorded time 503 may reflect the current recording time.
  • the mobile phone can also display the duration 504 of the time-lapse video corresponding to the currently recorded duration 503 in the preview interface 401. For example, if the mobile phone plays the final time-lapse photography video at a frame rate of 30 frames per second, when the duration 504 of the time-lapse photography video is 1 second, 30 frames of shooting pictures need to be extracted from the shooting pictures collected by the mobile phone. If the frame rate of the mobile phone extracting the shooting picture is one frame per second, the mobile phone can extract 30 frames of the shooting picture every 30 seconds.
  • That is, for every 30 seconds of recording, the duration 504 of the corresponding time-lapse video increases by 1 second. In this way, through time-lapse photography, a video recorded over a longer period of time can be compressed into a shorter time-lapse video, reproducing the process of slowly changing scenes in a short time. A short worked example of this relationship follows.
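  • The relationship between recorded duration, frame extraction frequency, and playback duration described above can be written out directly. The sketch below assumes the example values from the text (one extracted frame per second, playback at 30 frames per second); the function name is illustrative.

```python
def timelapse_playback_seconds(recorded_seconds: float,
                               extract_interval_s: float = 1.0,  # 1 extracted frame per second
                               playback_fps: float = 30.0) -> float:
    """Playback duration (504) of the time-lapse video for a given recorded duration (503)."""
    extracted_frames = recorded_seconds / extract_interval_s
    return extracted_frames / playback_fps

# 30 seconds of recording -> 30 extracted frames -> a 1-second time-lapse video.
assert timelapse_playback_seconds(30.0) == 1.0
```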
  • S303: The mobile phone determines the optical flow intensity between every two adjacent frames of the N collected frames.
  • In step S303, as shown in FIG. 6, after the mobile phone activates the recording function of time-lapse photography, it can collect N frames of shooting pictures at a certain collection frequency, where N dynamically increases with the length of the recording time.
  • After the mobile phone collects the second frame, the preset optical flow algorithm can be used to calculate the optical flow intensity Q(1) between the first frame and the second frame.
  • The greater the optical flow intensity Q(1), the greater the degree of change between the first and second frames, that is, the faster the scene in those two frames is moving.
  • Similarly, after the mobile phone collects the third frame, it can continue to calculate the optical flow intensity Q(2) between the second and third frames; after the fourth frame, it can calculate Q(3) between the third and fourth frames; and so on, until after the N-th frame it calculates the optical flow intensity Q(N-1) between the (N-1)-th and N-th frames.
  • In this way, each optical flow intensity Q calculated by the mobile phone reflects the degree of change between two adjacent frames. Furthermore, as shown in FIG. 7, from the successive values of Q between adjacent frames the mobile phone can determine how the optical flow intensity Q changes with the recording time during this time-lapse recording, that is, the curve of optical flow intensity Q versus recording time (referred to as the optical flow curve in subsequent embodiments).
  • the optical flow curve can reflect the changes of the shooting picture during the recording process. It should be noted that since the length of the recording time is generally manually operated by the user, the optical flow curve can dynamically change with the length of the recording time.
  • Taking recording a sunrise with time-lapse photography as an example, the shooting content generally changes slowly in the periods before and after the sunrise itself, so the optical flow intensity Q corresponding to those two periods is relatively small. If the mobile phone extracted frames at a high frequency during those periods, the resulting time-lapse video would contain many frames from before and after sunrise that show no significant change between them.
  • the mobile phone can dynamically set the frame drawing frequency of each recording period according to the above optical flow curve by executing the following step S304, so as to improve the photographic effect of time-lapse photography.
  • In step S304, the mobile phone can monitor in real time the magnitude of the optical flow intensity Q on the optical flow curve formed in step S303. Still taking the optical flow curve shown in FIG. 7 as an example, if the optical flow intensity Q from the 1st to the 10th second is detected to be less than a first threshold, the shot scene has not changed drastically during that period, and the mobile phone can set the frame extraction frequency for the 1st to the 10th second to the lower first frame extraction frequency, for example one frame every 2 seconds. Then, as shown in FIG. 8, the mobile phone extracts 5 frames, at one frame every 2 seconds, from the N1 frames collected during the 1st to the 10th second. Correspondingly, if the optical flow intensity Q from the 10th to the 15th second is detected to be greater than or equal to the first threshold, the scene is changing noticeably, and the mobile phone can set the frame extraction frequency for the 10th to the 15th second to the higher second frame extraction frequency, for example one frame every second; it can then extract 5 frames, at one frame every second, from the N2 frames collected during the 10th to the 15th second. Subsequently, the mobile phone can continue to dynamically adjust the frame extraction frequency of later recording periods according to the optical flow curve after the 15th second; the embodiments of this application impose no limitation on this.
  • In other words, when the change of the scene in the shooting picture is not obvious, the mobile phone can use the lower first frame extraction frequency to extract frames from the collected pictures; when the change of the scene is more obvious, the mobile phone can use the higher second frame extraction frequency. In this way, when the captured scene changes significantly, the mobile phone extracts a larger number of frames, and these additional frames reflect the details of the changing content more accurately, so the final recorded time-lapse video highlights the dynamic process in which the scene changes significantly. A sketch of this extraction loop is given below.
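  • To make the per-period extraction concrete, the following sketch walks over timestamped frames and keeps one frame whenever the extraction interval chosen for that moment has elapsed. The data layout (a list of (timestamp, frame, Q) tuples), the threshold, and the intervals are assumptions for illustration, not details from the application.

```python
def extract_frames(captured, threshold=0.5,
                   slow_interval_s=2.0, fast_interval_s=1.0):
    """captured: list of (timestamp_s, frame, q) tuples, where q is the optical
    flow intensity between this frame and the previous one.
    Returns the M extracted frames that will form the time-lapse video."""
    kept = []
    last_kept_t = None
    for timestamp_s, frame, q in captured:
        # Pick the extraction interval for the current moment from the optical flow curve.
        interval = slow_interval_s if q < threshold else fast_interval_s
        if last_kept_t is None or timestamp_s - last_kept_t >= interval:
            kept.append(frame)
            last_kept_t = timestamp_s
    return kept
```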
  • In other embodiments, the mobile phone can also dynamically adjust the frame extraction frequency of different recording periods in units of a preset time window. As shown in (a) of FIG. 10, the mobile phone can preset a time window 1001 with a length of 5 seconds. Starting from the beginning of the optical flow curve, the mobile phone calculates the average value of the optical flow intensity Q within the time window 1001, that is, the average value of Q from the 1st to the 5th second. If the calculated average is greater than or equal to a threshold, the mobile phone extracts frames collected in the time window 1001 at the higher second frame extraction frequency; if the average is less than the threshold, the mobile phone extracts them at the lower first frame extraction frequency. Furthermore, as shown in (b) of FIG. 10, the mobile phone then moves the time window 1001 to the next 5 seconds of the optical flow curve (that is, the 5th to the 10th second) and repeats the calculation of the average optical flow intensity Q within the window after each move. In this way, the mobile phone dynamically adjusts the frame extraction frequency of different recording periods in 5-second units while recording the time-lapse video.
  • the size of the above-mentioned time window may be fixed or may be dynamically changed, and the embodiment of the present application does not impose any limitation on this.
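A rough sketch of the time-window variant described above is given below. It reuses the same assumed (timestamp, Q) representation of captured frames; the 5-second window length matches the example in Figure 10, while the threshold and intervals are placeholder values.

```python
def extract_by_window(frames, window_len=5.0, q_threshold=5.0,
                      slow_interval=2.0, fast_interval=1.0):
    """frames: list of (timestamp_seconds, q) pairs in chronological order.
    Split the capture into consecutive windows of window_len seconds, average Q
    inside each window, then sample that window at the slow or fast interval."""
    extracted, i = [], 0
    while i < len(frames):
        window_start = frames[i][0]
        window = []
        while i < len(frames) and frames[i][0] < window_start + window_len:
            window.append(frames[i])
            i += 1
        mean_q = sum(q for _, q in window) / len(window)
        interval = fast_interval if mean_q >= q_threshold else slow_interval
        last = None
        for ts, q in window:
            if last is None or ts - last >= interval:
                extracted.append((ts, q))
                last = ts
    return extracted

# 15 s of capture at 30 fps; the scene speeds up after the 10th second.
frames = [(t / 30.0, 2.0 if t / 30.0 < 10 else 8.0) for t in range(450)]
print(len(extract_by_window(frames)))  # 11: 3 + 3 frames from the slow windows, 5 from the fast one
```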
  • in the above embodiments, the method for dynamically adjusting the frame extraction frequency is explained by taking the example of the mobile phone dynamically switching between two frame extraction frequencies (that is, the first frame extraction frequency and the second frame extraction frequency). It is understandable that three or more frame extraction frequencies can also be set for the mobile phone to adjust dynamically.
  • the mobile phone can preset the first threshold and the second threshold of the optical flow intensity (the second threshold is greater than the first threshold).
  • when it is detected that the optical flow intensity Q from time 0 to time T1 in the optical flow curve is less than the first threshold, the mobile phone can extract X frames of shooting pictures from time 0 to T1 at the first frame extraction frequency; when it is detected that the optical flow intensity Q from time T1 to time T2 in the optical flow curve is greater than the first threshold and less than the second threshold, the mobile phone can extract Y frames of shooting pictures from time T1 to T2 at the second frame extraction frequency (the second frame extraction frequency is greater than the first); when it is detected that the optical flow intensity Q after time T2 in the optical flow curve is greater than the second threshold, the mobile phone can extract Z frames of shooting pictures after T2 at the third frame extraction frequency (the third frame extraction frequency is greater than the second).
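The three-tier selection can be summarized in a few lines; the concrete threshold and interval values below are placeholders, not values specified by the patent.

```python
def pick_interval(q, first_threshold=3.0, second_threshold=6.0,
                  first_interval=2.0,    # lowest frame-extraction frequency
                  second_interval=1.0,   # middle frame-extraction frequency
                  third_interval=0.5):   # highest frame-extraction frequency
    """Return the sampling interval (seconds) for a given optical-flow intensity Q."""
    if q < first_threshold:
        return first_interval
    if q < second_threshold:
        return second_interval
    return third_interval

# Q below the first threshold, between the two thresholds, and above the second threshold:
print(pick_interval(1.0), pick_interval(4.0), pick_interval(9.0))  # 2.0 1.0 0.5
```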
  • the mobile phone may also provide the user with the function of manually setting the frame rate for different recording periods.
  • the user can manually enter the frame rate for different recording periods in the camera's setting interface.
  • for example, the user can set the recording time of the entire time-lapse photography to 30 minutes. Within these 30 minutes, the frame extraction frequency set by the user for the first 10 minutes and the last 10 minutes is 1 frame extracted every 1 second, and the frame extraction frequency set by the user for the middle 10 minutes is 1 frame extracted every 0.5 seconds.
  • the mobile phone can dynamically adjust the corresponding frame sampling frequency in different recording time periods according to the frame sampling frequency set by the user, and the embodiment of the present application does not impose any limitation on this.
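As a sketch of how such user-configured settings might be applied, the schedule below mirrors the 30-minute example above (1 frame per second in the first and last 10 minutes, 1 frame every 0.5 seconds in the middle 10 minutes); the data structures are assumed for illustration.

```python
# (start_seconds, end_seconds, sampling interval in seconds), as configured by the user
schedule = [
    (0.0,    600.0, 1.0),
    (600.0, 1200.0, 0.5),
    (1200.0, 1800.0, 1.0),
]

def interval_at(t, schedule):
    """Look up the user-configured sampling interval for capture time t (seconds)."""
    for start, end, interval in schedule:
        if start <= t < end:
            return interval
    return schedule[-1][2]  # fall back to the last configured interval

def extract_with_schedule(timestamps, schedule):
    extracted, last = [], None
    for t in timestamps:
        if last is None or t - last >= interval_at(t, schedule):
            extracted.append(t)
            last = t
    return extracted

# 30 minutes of capture at 30 fps:
timestamps = [i / 30.0 for i in range(30 * 60 * 30)]
print(len(extract_with_schedule(timestamps, schedule)))  # 2400 frames: 600 + 1200 + 600
```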
  • in response to the stop-recording operation performed by the user in the time-lapse photography mode, the mobile phone encodes the extracted M frames of shooting pictures into a time-lapse photography video, where the M frames of shooting pictures include the foregoing X frames and Y frames of shooting pictures.
  • the user can click the record button 404 in the preview interface 401 again.
  • the mobile phone may encode the M frames of shooting pictures extracted at this time into this time-lapse photography video in chronological order.
  • the extracted M frames of shooting pictures include the respective shooting pictures extracted by the mobile phone according to the corresponding frame sampling frequency in different recording time periods.
  • for example, if the mobile phone extracts 300 frames of shooting pictures in total, it can encode these 300 frames at the preset frame rate. Taking a preset frame rate of 30 frames per second as an example, the mobile phone can encode the extracted 300 frames of shooting pictures into a 10-second time-lapse video at a frame rate of 30 frames per second.
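Assuming the extracted frames are available as same-sized BGR arrays and OpenCV is used for encoding (an assumption; the patent does not name an encoder), a minimal encoding step could look like this:

```python
import cv2

def encode_timelapse(extracted_frames, path="timelapse.mp4", fps=30):
    """Write the extracted frames, in chronological order, into a video at the preset frame rate."""
    h, w = extracted_frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in extracted_frames:
        writer.write(frame)
    writer.release()
    # 300 extracted frames at 30 fps yield a 10-second time-lapse video (300 / 30 = 10).
```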
  • in the foregoing embodiments, the method for dynamically adjusting the frame extraction frequency during time-lapse photography is described by taking the mobile phone calculating the optical flow intensity of the captured shooting pictures as an example.
  • the correspondence between different shooting scenes and different frame extraction frequencies may also be stored in the mobile phone in advance. For example, when the shooting scene is one in which the scenery changes slowly, such as a sky scene, the corresponding frame extraction frequency is one frame extracted every 2 seconds; when the shooting scene is one in which the scenery changes quickly, such as a bird scene, the corresponding frame extraction frequency is one frame extracted every 0.5 seconds.
  • the mobile phone can recognize in real time the shooting scene being shot in the captured shooting screen.
  • the mobile phone can recognize the shooting scene in the shooting screen through image analysis or AI (artificial intelligence) recognition algorithms.
  • for example, the mobile phone can extract shooting pictures at the first frame extraction frequency of one frame every 2 seconds from the shooting pictures collected from the 1st to the 10th second, and extract shooting pictures at the second frame extraction frequency of one frame every 0.5 seconds from the shooting pictures collected from the 11th to the 14th second.
  • the mobile phone can encode the extracted shooting picture into this time-lapse video.
  • in other words, when the mobile phone records a time-lapse photography video, it can use different frame extraction frequencies in different shooting scenes to extract the shooting pictures of the time-lapse video from the shooting pictures it records, so that the finally formed time-lapse video can accurately present the dynamic process of scene changes in different shooting scenes, improving the shooting effect of the time-lapse video and the user experience.
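A hypothetical sketch of this scene-based variant is shown below; the scene labels, intervals, and the recognize_scene placeholder stand in for the image-analysis or AI recognition step, which is not specified here.

```python
SCENE_INTERVALS = {
    "sky":  2.0,   # slowly changing scene: one frame extracted every 2 seconds
    "bird": 0.5,   # quickly changing scene: one frame extracted every 0.5 seconds
}
DEFAULT_INTERVAL = 1.0

def recognize_scene(frame):
    """Placeholder for the device's image-analysis / AI scene recognition."""
    return frame.get("scene", "unknown")

def extract_by_scene(frames):
    """frames: list of dicts like {"t": seconds, "scene": label}."""
    extracted, last = [], None
    for frame in frames:
        interval = SCENE_INTERVALS.get(recognize_scene(frame), DEFAULT_INTERVAL)
        if last is None or frame["t"] - last >= interval:
            extracted.append(frame)
            last = frame["t"]
    return extracted

# Pictures from the 1st-10th second labelled as a sky scene, 11th-14th second as a bird scene:
frames = [{"t": i / 30.0, "scene": "sky" if i / 30.0 < 10 else "bird"}
          for i in range(14 * 30)]
print(len(extract_by_scene(frames)))  # 13: 5 frames from the sky scene, 8 from the bird scene
```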
  • in other embodiments, after the mobile phone starts to record the time-lapse photography video, it can recognize in real time the shooting target in the captured shooting pictures and the movement speed of that target. As shown in Figure 14, if the mobile phone recognizes that the shooting target in the pictures captured from the 1st second to the 5th second is the person 1401, and the movement speed of the person 1401 is less than a preset value, the mobile phone can extract shooting pictures from the pictures captured in the 1st to the 5th second at the first frame extraction frequency of one frame extracted every 2 seconds.
  • correspondingly, if the movement speed of the shooting target in the pictures captured from the 5th to the 10th second is greater than or equal to the preset value, the mobile phone can extract shooting pictures from the pictures captured in the 5th to the 10th second at the second frame extraction frequency of one frame extracted every 0.5 seconds.
  • the mobile phone can encode the extracted shooting picture into this time-lapse video.
  • the foregoing shooting target may be manually selected by the user.
  • the user may select the shooting target in the shooting screen displayed on the preview interface by manual focusing.
  • the aforementioned shooting target may also be automatically recognized by the mobile phone through image analysis or AI recognition and other algorithms, and the embodiment of the present application does not impose any limitation on this.
  • in the above embodiments, a time-lapse photography video of a sunrise process is taken as an example. It is understandable that the mobile phone can also use the above-mentioned time-lapse photography recording method to record the sunset process, the flower blooming process, and other scenes.
  • the embodiment of this application does not impose any limitation on this.
  • taking the recording of a flower blooming process as an example, the mobile phone can determine the optical flow intensity Q between the two adjacent frames of shooting pictures collected each time. When the variation range of the shooting content in the shooting pictures is not large, the optical flow intensity Q between two adjacent shooting pictures is generally small; when the variation range of the shooting content in the shooting pictures increases, the optical flow intensity Q between two adjacent shooting pictures is generally larger.
  • if it is detected within time period T1 that the optical flow intensity Q between two adjacent frames of shooting pictures is less than the first threshold, the mobile phone can extract X frames of shooting pictures at the first frame extraction frequency with the smaller value; if it is detected within time period T2 that the optical flow intensity Q between two adjacent frames of shooting pictures is greater than or equal to the first threshold, indicating that the scene changes in the shooting pictures are more obvious, the mobile phone can extract Y frames of shooting pictures at the second frame extraction frequency with the larger value. In this way, the finally formed time-lapse photography video can accurately present the dynamic process of the flower bud opening during the blooming process, thereby improving the shooting effect of the time-lapse photography video and the user experience.
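The patent does not fix how the optical flow intensity Q is computed; one plausible choice, shown below as an assumption, is OpenCV's dense Farneback optical flow with the mean per-pixel flow magnitude taken as Q.

```python
import cv2
import numpy as np

def optical_flow_intensity(prev_bgr, next_bgr):
    """Mean dense-flow magnitude between two adjacent captured frames, used here as Q."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel displacement length
    return float(magnitude.mean())            # larger Q means a larger content change
```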
  • in addition to adjusting the frame extraction frequency in real time according to the optical flow intensity Q between two adjacent frames of shooting pictures, the mobile phone can also adjust the frame extraction frequency according to other parameters that can reflect the variation range of the shooting content.
  • the mobile phone can use flowers in the shooting screen as the shooting target to detect the speed of the shooting target.
  • the mobile phone can adjust the frame sampling frequency in real time according to the motion speed of the shooting target to extract the shooting pictures of this time-lapse photography video, which is not limited in the embodiment of the present application.
  • in other words, when the mobile phone is recording a time-lapse photography video, if it detects that the movement speed of the shooting target is slow, it can use a lower frame extraction frequency to extract the shooting pictures of the time-lapse video from the shooting pictures it records; if it detects that the movement speed of the shooting target is fast, it can use a higher frame extraction frequency to extract the shooting pictures of the time-lapse video from the shooting pictures it records, so that the parts in which the shooting target moves quickly are highlighted in the time-lapse video.
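A brief sketch of the target-speed variant, under the assumption that a tracker already provides a per-frame movement speed for the shooting target (the threshold and intervals are illustrative only):

```python
def extract_by_target_speed(frames, speed_threshold=50.0,
                            slow_interval=2.0, fast_interval=0.5):
    """frames: list of (timestamp_seconds, target_speed) pairs; speed in pixels per second."""
    extracted, last = [], None
    for t, speed in frames:
        interval = fast_interval if speed >= speed_threshold else slow_interval
        if last is None or t - last >= interval:
            extracted.append((t, speed))
            last = t
    return extracted
```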
  • it should be noted that the mobile phone can also save each frame of the shooting picture collected by the camera during the recording of the time-lapse video without performing the above steps S303-S305.
  • then, the mobile phone can dynamically adjust the frame extraction frequency of different recording time periods by performing the above steps S303-S304, and extract the shooting pictures of this time-lapse photography video from the saved shooting pictures according to the frame extraction frequency corresponding to each recording time period, which is not limited in this embodiment of the application.
  • the mobile phone plays the aforementioned time-lapse photography video at a preset frame rate.
  • the mobile phone can store the time-lapse photography video obtained in step S305 in the photo album application (also referred to as a gallery application) of the mobile phone.
  • the mobile phone can play the time-lapse video 1501 at the frame rate used during encoding (for example, 30 frames per second).
  • since the frame extraction frequency is increased from one frame every 1 second to one frame every 0.5 seconds when the sun rises, more shooting pictures of this scene change are extracted into the time-lapse video 1501. The shooting pictures of the rising sun finally extracted by the mobile phone can more vividly and accurately reflect the changes in detail as the sun rises, so that the time-lapse video 1501 focuses on presenting the dynamic process in which the scene changes significantly during playback, thereby improving the recording effect of time-lapse photography and the user experience.
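A small back-of-the-envelope calculation illustrates the emphasis effect; the 10-second and 5-second durations below are assumed for the example, while the two extraction intervals follow the sunrise description above.

```python
slow_seconds, slow_interval = 10, 1.0   # before the sun rises: 1 frame every 1 second
fast_seconds, fast_interval = 5, 0.5    # while the sun rises: 1 frame every 0.5 seconds
playback_fps = 30

slow_frames = slow_seconds / slow_interval   # 10 frames
fast_frames = fast_seconds / fast_interval   # 10 frames
print(slow_frames / playback_fps, fast_frames / playback_fps)
# Each period occupies about 0.33 s of playback: the 5-second sunrise gets as much
# screen time as the 10 seconds before it.
```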
  • an embodiment of the present application discloses an electronic device, which can be used to implement the methods described in each of the above method embodiments.
  • the electronic device may specifically include: a collection module 1601, a preview module 1602, an optical flow analysis module 1603, a frame extraction module 1604, an encoding module 1605, and a playback module 1606.
  • the collection module 1601 is used to support the electronic device to perform the process S301 in FIG. 3; the preview module 1602 is used to support the electronic device to perform the process S302 in FIG. 3; the optical flow analysis module 1603 is used to support the electronic device to perform the process S303 in FIG. 3;
  • the frame extraction module 1604 is used to support the electronic device to perform the processes S304a, S304b, ... in FIG. 3;
  • the encoding module 1605 is used to support the electronic device to perform the process S305 in FIG. 3;
  • the playback module 1606 is used to support the electronic device to perform the process S306 in FIG. 3. All relevant content of each step involved in the above method embodiments can be cited in the function description of the corresponding functional module, and will not be repeated here.
  • an embodiment of the present application discloses an electronic device, including: a touch screen 1701, where the touch screen 1701 includes a touch-sensitive surface 1706 and a display screen 1707; one or more processors 1702; a memory 1703; one or more cameras 1708; and one or more computer programs 1704.
  • the aforementioned devices may be connected via one or more communication buses 1705.
  • the aforementioned one or more computer programs 1704 are stored in the aforementioned memory 1703 and are configured to be executed by the one or more processors 1702. The one or more computer programs 1704 include instructions, and these instructions can be used to execute each step in the foregoing embodiments.
  • for example, the foregoing processor 1702 may specifically be the processor 110 shown in FIG. 1, the foregoing memory 1703 may specifically be the internal memory 121 and/or the external memory 120 shown in FIG. 1, the foregoing display screen 1707 may specifically be the display screen 194 shown in FIG. 1, the aforementioned camera 1708 may specifically be the camera 193 shown in FIG. 1, and the aforementioned touch-sensitive surface 1706 may specifically be the touch sensor in the sensor module 180 shown in FIG. 1, which is not limited in this embodiment of the application.
  • the functional units in the various embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • a computer readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk, optical disk, and other media that can store program code.

Abstract

The present application relates to the technical field of terminals, and provides a recording method for time-lapse photography, and an electronic device. A time-lapse photography video can be recorded at the frame extraction frequency which dynamically changes so as to emphatically present the highlight having a high image change degree in time-lapse photography and improve the user experience. The method comprises: an electronic device displays a preview interface in a time-lapse photography mode; in response to a recording operation executed by a user, the electronic device starts to record frames of photography images captured by a camera; the electronic device extracts X frames of photography images at a first frame extraction frequency from N1 frames of photography images acquired within a first recording period of time; the electronic device extracts Y frames of photography images at a second frame extraction frequency from N2 frames of photography images acquired within a second recording period of time, the second frame extraction frequency being different from the first frame extraction frequency; in response to a recording stopping operation executed by the user, the electronic device encodes extracted M frames of photography images into a time-lapse photography video.

Description

一种延时摄影的录制方法及电子设备Time-lapse photography recording method and electronic equipment
本申请要求在2019年3月25日提交中国国家知识产权局、申请号为201910229645.5、发明名称为“一种延时摄影的录制方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of a Chinese patent application filed with the State Intellectual Property Office of China, the application number is 201910229645.5, and the invention title is "a recording method and electronic equipment for time-lapse photography" on March 25, 2019. The entire content of the application is approved The reference is incorporated in this application.
技术领域Technical field
本申请涉及终端技术领域,尤其涉及一种延时摄影的录制方法及电子设备。This application relates to the field of terminal technology, and in particular to a recording method and electronic equipment for time-lapse photography.
背景技术Background technique
手机等电子设备中的相机应用中一般设置有延时摄影功能。延时摄影也可称为缩时摄影(time-lapse photography)或缩时录影,是一种可将时间压缩的拍摄技术,可在短时间内再现景物缓慢变化的过程。Camera applications in electronic devices such as mobile phones generally have a time-lapse photography function. Time-lapse photography can also be called time-lapse photography or time-lapse video. It is a shooting technique that can compress time and reproduce the process of slowly changing scenes in a short time.
以用户使用手机中的延时摄影功能拍摄花朵的开放过程举例。花蕾的开放大约需要3天3夜(约72小时),用户启动延时摄影功能后,手机可以从正在拍摄的视频中以一定的抽帧频率抽取正在拍摄的画面帧。例如,手机可每半小时抽取一幅画面帧,按照抽帧顺序记录开花动作的微变。这样,72小时后手机可获取到144个画面帧。进而,手机可按照正常的帧率(例如24帧/秒)按顺序播放这144个画面帧,从而在6秒(即144/30=6)内重现手机在3天3夜中拍摄到的开花过程。Take the example of the user using the time-lapse photography function in the mobile phone to photograph the blooming process of flowers. It takes about 3 days and 3 nights (approximately 72 hours) for the buds to open. After the user activates the time-lapse photography function, the mobile phone can extract frames from the video being shot at a certain frame rate. For example, a mobile phone can extract a picture frame every half an hour, and record the slight changes in the flowering movement in the order of frame drawing. In this way, the phone can obtain 144 frames after 72 hours. Furthermore, the mobile phone can play these 144 frames in sequence at a normal frame rate (for example, 24 frames per second), thereby reproducing the blossoms captured by the mobile phone in 3 days and 3 nights within 6 seconds (ie, 144/30=6) process.
一般,延时摄影功能中的抽帧频率是手机出厂时预设的,或者,该抽帧频率可以是用户手动设置的。一旦抽帧频率确定后,手机在进行延时摄影时便会按照设置好的抽帧频率采集画面帧并形成延时摄影视频。但是,在拍摄花开过程、日出过程等变化场景时,使用固定的抽帧频率抽取各个画面帧会使得花开瞬间、日出瞬间等变化较快的精彩画面较少且较为短暂,而花开或日出准备过程中抽取的画面较多且较为漫长,无法突出延时摄影中的精彩部分,用户的使用体验不高。Generally, the frame sampling frequency in the time-lapse photography function is preset when the mobile phone leaves the factory, or the frame sampling frequency can be manually set by the user. Once the frame rate is determined, the mobile phone will collect frames according to the set frame rate during time-lapse photography and form a time-lapse video. However, when shooting flower blooming process, sunrise process and other changing scenes, using a fixed frame rate to extract each frame will make the rapid change of the flower blooming moment, sunrise moment and other fast-changing wonderful pictures less and shorter. During the opening or sunrise preparation process, the pictures extracted are many and relatively long, which cannot highlight the wonderful parts of time-lapse photography, and the user experience is not high.
发明内容Summary of the invention
本申请提供一种延时摄影的录制方法及电子设备,可使用动态变化的抽帧频率录制延时摄影视频,从而将延时摄影中画面变化程度较高的精彩部分重点呈现,提高用户的使用体验。This application provides a recording method and electronic equipment for time-lapse photography, which can record time-lapse photography video with a dynamically changing frame rate, so as to highlight the highlights of the time-lapse photography and improve the user's use Experience.
为达到上述目的,本申请采用如下技术方案:In order to achieve the above objectives, this application adopts the following technical solutions:
第一方面,本申请提供一种延时摄影的录制方法,包括:电子设备显示延时摄影模式的预览界面;响应于用户执行的录制操作(例如在预览界面中点击录制按钮的操作等),电子设备可以开始录制摄像头捕捉到的每一帧拍摄画面;在本次录制过程中,电子设备可使用第一抽帧频率从第一录制时间段内采集到的N1帧拍摄画面中抽取X(X<N1)帧拍摄画面;并且,电子设备可使用第二抽帧频率(第二抽帧频率与第一抽帧频率不相同)从第二录制时间段内采集到的N2帧拍摄画面中抽取Y(Y<N2)帧拍摄画面;响应于用户执行的停止录制操作(例如用户在预览界面中再次点击录制按钮的操作),电子设备可将本次录制过程中抽取到的M帧拍摄画面(包括上述X帧拍摄画面和Y帧拍摄画面)编码为延时摄影视频。In the first aspect, this application provides a recording method for time-lapse photography, including: an electronic device displays a preview interface of the time-lapse photography mode; in response to a recording operation performed by the user (for example, the operation of clicking the recording button in the preview interface, etc.), The electronic device can start recording each frame captured by the camera; in this recording process, the electronic device can use the first frame rate to extract X(X) from the N1 frames captured during the first recording period. <N1) Frame shooting images; and the electronic device can use the second frame drawing frequency (the second frame drawing frequency is different from the first frame drawing frequency) to extract Y from the N2 frames of the shooting images collected during the second recording time period (Y<N2) frame shooting screen; in response to the user's stop recording operation (for example, the user clicks the record button again in the preview interface), the electronic device can extract the M frames of shooting screen (including The above-mentioned X-frame shooting picture and Y-frame shooting picture) are encoded as time-lapse photography video.
也就是说,在一次延时摄影视频的拍摄过程中,电子设备可动态的使用不同抽帧频率抽取正在录制的拍摄画面,形成本次延时摄影视频。这样,电子设备制作的延时摄影视频可以 动态的呈现出不同录制时间段内拍摄内容的变化,提高延时摄影的摄影效果以及用户的使用体验。That is to say, during the shooting process of a time-lapse photography video, the electronic device can dynamically use different frame extraction frequencies to extract the shooting pictures being recorded to form this time-lapse photography video. In this way, the time-lapse photography video produced by the electronic device can dynamically present the changes of the shooting content in different recording time periods, which improves the photography effect of the time-lapse photography and the user experience.
在一种可能的实现方式中,在电子设备开始录制摄像头捕捉到的每一帧拍摄画面之后,还包括:若检测到第一录制时间段内拍摄画面中拍摄内容的变化幅度小于阈值,则电子设备可确定第一录制时间段内的抽帧频率为第一抽帧频率;若检测到第二录制时间段内拍摄画面中拍摄内容的变化幅度大于或等于该阈值,则电子设备可确定第二录制时间段内的抽帧频率为第二抽帧频率,第二抽帧频率大于第一抽帧频率。In a possible implementation manner, after the electronic device starts to record each frame of the shooting image captured by the camera, it further includes: if it is detected that the variation of the shooting content in the shooting image during the first recording time period is less than the threshold, then the electronic device The device can determine that the frame sampling frequency in the first recording time period is the first frame sampling frequency; if it is detected that the change in the shooting content in the shooting image during the second recording time period is greater than or equal to the threshold, the electronic device can determine the second The frame sampling frequency in the recording period is the second frame sampling frequency, and the second frame sampling frequency is greater than the first frame sampling frequency.
也就是说,电子设备在录制过程中可根据拍摄画面中拍摄内容的变化情况选择对应的抽帧频率进行抽帧,形成延时摄影视频。当拍摄画面中拍摄内容的变化不太明显时,说明此时拍摄内容的运动速度较为缓慢,电子设备可使用较低的抽帧频率抽取帧画面。而当拍摄画面中拍摄内容的变化较为明显时,说明此时拍摄内容的运动速度较快,电子设备可使用较高的抽帧频率抽取更多的帧画面,更多的帧画面可以更加准确的反映拍摄内容的细节变化,这样,电子设备录制的延时摄影视频中可对变化速度较快的精彩部分重点呈现,从而提高延时摄影的摄影效果以及用户的使用体验。That is to say, during the recording process, the electronic device can select the corresponding frame extraction frequency according to the change of the shooting content in the shooting screen to extract the frames to form a time-lapse video. When the change of the shooting content in the shooting picture is not obvious, it means that the motion speed of the shooting content is relatively slow at this time, and the electronic device can use a lower frame drawing frequency to extract the frame picture. When the change of the shooting content in the shooting screen is more obvious, it means that the shooting content is moving faster at this time. The electronic device can use a higher frame sampling frequency to extract more frames, and more frames can be more accurate Reflect the changes in the details of the shooting content, so that the time-lapse photography video recorded by the electronic device can focus on the highlights that change quickly, thereby improving the photography effect of the time-lapse photography and the user experience.
示例性的,在电子设备开始录制摄像头捕捉到的每一帧拍摄画面之后,还包括:电子设备计算每次采集到的相邻两帧拍摄画面之间的光流强度,该光流强度用于反映相邻两帧拍摄画面中拍摄内容的变化幅度;那么,如果检测到第一录制时间段内相邻两帧拍摄画面之间的光流强度均小于第一预设值,说明拍摄画面中景物的变化不太明显,则电子设备可确定第一录制时间段内的抽帧频率为取值较小的第一抽帧频率;若检测到第二录制时间段内相邻两帧拍摄画面之间的光流强度均大于或等于第一预设值,说明拍摄画面中景物的变化较为明显,则电子设备可确定第二录制时间段内的抽帧频率为取值较大的第二抽帧频率。Exemplarily, after the electronic device starts to record each frame of the shooting picture captured by the camera, it further includes: the electronic device calculates the optical flow intensity between two adjacent frames of the shooting picture collected each time, and the optical flow intensity is used for Reflects the change range of the shooting content in two adjacent frames of shooting; then, if it is detected that the optical flow intensity between two adjacent frames of shooting during the first recording period is less than the first preset value, it indicates that the scene in the shooting screen If the change is not obvious, the electronic device can determine that the frame sampling frequency in the first recording period is the first frame sampling frequency with a smaller value; if it is detected that there is a gap between two adjacent frames in the second recording period The optical flow intensity of is greater than or equal to the first preset value, indicating that the scene changes in the shooting image are more obvious, and the electronic device can determine the frame sampling frequency in the second recording period as the second frame sampling frequency with a larger value .
或者,如果检测到第一录制时间段内采集到的N1帧拍摄画面属于预设的第一拍摄场景,则电子设备可确定第一录制时间段内的抽帧频率为第一抽帧频率;如果检测到第二录制时间段内采集到的N2帧拍摄画面属于预设的第二拍摄场景,则电子设备可确定第二录制时间段内的抽帧频率为第二抽帧频率。一般,第一拍摄场景中景物的运动速度较慢,而第一拍摄场景中景物的运动速度较快。这样,电子设备最终制作的延时摄影视频可准确呈现出不同拍摄场景下景物变化的动态过程,提高延时摄影视频的拍摄效果和用户的使用体验。Or, if it is detected that the N1 frames of shooting images collected during the first recording time period belong to the preset first shooting scene, the electronic device may determine that the frame sampling frequency in the first recording time period is the first frame sampling frequency; if If it is detected that the N2 frames of shooting images collected during the second recording time period belong to the preset second shooting scene, the electronic device may determine that the frame sampling frequency in the second recording time period is the second frame sampling frequency. Generally, the movement speed of the scene in the first shooting scene is slow, and the movement speed of the scene in the first shooting scene is faster. In this way, the time-lapse photography video finally produced by the electronic device can accurately present the dynamic process of scene changes in different shooting scenes, and improve the shooting effect of the time-lapse photography video and the user experience.
又或者,如果检测到第一录制时间段内拍摄目标的运动速度小于第二预设值,则电子设备可确定第一录制时间段内的抽帧频率为第一抽帧频率;如果检测到第二录制时间段内拍摄目标的运动速度大于或等于第二预设值,则电子设备可确定第二录制时间段内的抽帧频率为第二抽帧频率,从而将拍摄目标快速运动的部分重点呈现在延时摄影视频中。Or, if it is detected that the movement speed of the shooting target in the first recording time period is less than the second preset value, the electronic device may determine that the frame drawing frequency in the first recording time period is the first frame drawing frequency; if the first recording time period is detected 2. The moving speed of the shooting target during the recording time period is greater than or equal to the second preset value, the electronic device can determine that the frame drawing frequency in the second recording time period is the second frame drawing frequency, so as to focus on the rapid movement of the shooting target Presented in time-lapse video.
在一种可能的实现方式中,在电子设备将抽取到的M帧拍摄画面编码为延时摄影视频之前,还包括:电子设备使用第三抽帧频率从第三录制时间段内采集到的N3帧拍摄画面中抽取Z帧拍摄画面(这Z帧拍摄画面为电子设备抽取到的M帧拍摄画面中的一部分),第三抽帧频率与上述第二抽帧频率和第一抽帧频率均不相同,第三录制时间段与上述第二录制时间段和第一录制时间段均不重叠。In a possible implementation manner, before the electronic device encodes the extracted M-frame shooting picture into a time-lapse photography video, it further includes: the electronic device uses the third frame sampling frequency to collect N3 from the third recording time period. Extracting Z frames from the frame shooting picture (this Z frame shooting picture is part of the M frame shooting picture extracted by the electronic device), the third frame sampling frequency is different from the above second frame sampling frequency and the first frame sampling frequency Similarly, the third recording time period does not overlap with the foregoing second recording time period and the first recording time period.
在一种可能的实现方式中,在电子设备开始录制摄像头捕捉到的每一帧拍摄画面之后,还包括:电子设备在该预览画面中实时显示正在录制的每一帧拍摄画面。In a possible implementation manner, after the electronic device starts to record each frame of the shooting picture captured by the camera, the method further includes: the electronic device displays each frame of the shooting picture being recorded in the preview picture in real time.
在一种可能的实现方式中,当电子设备在该预览画面中实时显示正在录制的每一帧拍摄画面时,还包括:电子设备在该预览画面中实时显示当前的录制时长,以及与该录制时长对 应的延时摄影视频的播放时长,使用户可以实时获知本次延时摄影视频的视频长度。In a possible implementation manner, when the electronic device displays each frame of the shooting frame being recorded in the preview screen in real time, it also includes: the electronic device displays the current recording duration in the preview screen in real time, and is related to the recording The playback time of the time-lapse photography video corresponding to the duration allows the user to know the video length of the time-lapse photography video in real time.
在一种可能的实现方式中,电子设备将抽取到的M帧拍摄画面编码为延时摄影视频,包括:电子设备按照预设帧率将抽取到的M帧拍摄画面编码为本次延时摄影的延时摄影视频。In a possible implementation manner, the electronic device encodes the extracted M-frame shooting picture into a time-lapse photography video, including: the electronic device encodes the extracted M-frame shooting picture into this time-lapse photography according to a preset frame rate Time-lapse video.
在一种可能的实现方式中,在电子设备将抽取到的M帧拍摄画面编码为延时摄影视频之后,还包括:响应于用户打开该延时摄影视频的操作,电子设备按照预设帧率播放该延时摄影视频。在播放该延时摄影视频时,由于电子设备在景物运动较快时抽取到的拍摄画面较多,因此电子设备可将延时摄影录制过程中画面变化程度较高的精彩部分重点呈现。In a possible implementation manner, after the electronic device encodes the extracted M-frame shooting picture into a time-lapse photography video, the method further includes: in response to an operation of the user to open the time-lapse photography video, the electronic device uses a preset frame rate Play the time-lapse video. When playing the time-lapse photography video, since the electronic device extracts more shooting pictures when the scene moves quickly, the electronic device can highlight the highlights of the high-level picture change during the time-lapse photography recording process.
第二方面,本申请提供一种电子设备,包括:触摸屏、一个或多个处理器、一个或多个存储器、一个或多个摄像头以及一个或多个计算机程序;其中,处理器与触摸屏、存储器和摄像头均耦合,上述一个或多个计算机程序被存储在存储器中,当电子设备运行时,该处理器执行该存储器存储的一个或多个计算机程序,以使电子设备执行上述任一项所述的延时摄影的录制方法。In a second aspect, this application provides an electronic device, including: a touch screen, one or more processors, one or more memories, one or more cameras, and one or more computer programs; wherein the processor and the touch screen, the memory And the camera are both coupled, the above one or more computer programs are stored in the memory, when the electronic device is running, the processor executes the one or more computer programs stored in the memory, so that the electronic device executes any of the above The recording method of time-lapse photography.
第三方面,本申请提供一种计算机存储介质,包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行如第一方面中任一项所述的延时摄影的录制方法。In a third aspect, the present application provides a computer storage medium, including computer instructions, which when the computer instructions run on an electronic device, cause the electronic device to execute the time-lapse photography recording method described in any one of the first aspect.
第四方面,本申请提供一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行如第一方面中任一项所述的延时摄影的录制方法。In a fourth aspect, the present application provides a computer program product, which when the computer program product runs on an electronic device, causes the electronic device to execute the recording method of time-lapse photography as described in any one of the first aspect.
可以理解地,上述提供的第二方面所述的电子设备、第三方面所述的计算机存储介质,以及第四方面所述的计算机程序产品均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。Understandably, the electronic equipment described in the second aspect, the computer storage medium described in the third aspect, and the computer program product described in the fourth aspect provided above are all used to execute the corresponding methods provided above. For the beneficial effects that can be achieved, please refer to the beneficial effects in the corresponding method provided above, which will not be repeated here.
附图说明Description of the drawings
图1为本申请实施例提供的一种电子设备的结构示意图一;FIG. 1 is a first structural diagram of an electronic device according to an embodiment of the application;
图2为本申请实施例提供的一种拍摄原理示意图;FIG. 2 is a schematic diagram of a photographing principle provided by an embodiment of the application;
图3为本申请实施例提供的一种延时摄影的录制方法的流程示意图;3 is a schematic flowchart of a recording method for time-lapse photography according to an embodiment of the application;
图4为本申请实施例提供的一种延时摄影的录制方法的场景示意图一;4 is a schematic diagram 1 of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图5为本申请实施例提供的一种延时摄影的录制方法的场景示意图二;5 is a second schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图6为本申请实施例提供的一种延时摄影的录制方法的场景示意图三;FIG. 6 is a third scene schematic diagram of a recording method for time-lapse photography according to an embodiment of the application;
图7为本申请实施例提供的一种延时摄影的录制方法的场景示意图四;FIG. 7 is a fourth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图8为本申请实施例提供的一种延时摄影的录制方法的场景示意图五;FIG. 8 is a schematic diagram five of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图9为本申请实施例提供的一种延时摄影的录制方法的场景示意图六;FIG. 9 is a sixth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图10为本申请实施例提供的一种延时摄影的录制方法的场景示意图七;10 is a seventh schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图11为本申请实施例提供的一种延时摄影的录制方法的场景示意图八;11 is an eighth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图12为本申请实施例提供的一种延时摄影的录制方法的场景示意图九;FIG. 12 is a scene schematic diagram 9 of a recording method for time-lapse photography according to an embodiment of the application;
图13为本申请实施例提供的一种延时摄影的录制方法的场景示意图十;FIG. 13 is a tenth schematic diagram of a scene of a recording method for time-lapse photography according to an embodiment of the application;
图14为本申请实施例提供的一种延时摄影的录制方法的场景示意图十一;14 is a schematic diagram eleventh scene of a recording method for time-lapse photography according to an embodiment of the application;
图15为本申请实施例提供的一种延时摄影的录制方法的场景示意图十二;15 is a schematic diagram twelfth of a scene of a recording method for time-lapse photography according to an embodiment of this application;
图16为本申请实施例提供的一种电子设备的结构示意图二;FIG. 16 is a second structural diagram of an electronic device provided by an embodiment of this application;
图17为本申请实施例提供的一种电子设备的结构示意图三。FIG. 17 is a third structural diagram of an electronic device provided by an embodiment of this application.
具体实施方式detailed description
下面将结合附图对本实施例的实施方式进行详细描述。The implementation of this embodiment will be described in detail below with reference to the accompanying drawings.
示例性的,本申请实施例提供的一种延时摄影的录制方法可应用于手机、平板电脑、笔 记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、手持计算机、上网本、个人数字助理(personal digital assistant,PDA)、可穿戴电子设备、虚拟现实设备等电子设备,本申请实施例对此不做任何限制。Exemplarily, the recording method of time-lapse photography provided by the embodiments of this application can be applied to mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, and personal digital computers. Electronic devices such as personal digital assistants (PDAs), wearable electronic devices, virtual reality devices, etc., are not limited in the embodiments of the present application.
示例性的,图1示出了电子设备100的结构示意图。Exemplarily, FIG. 1 shows a schematic structural diagram of an electronic device 100.
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2. , Mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components. The illustrated components can be implemented in hardware, software, or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), and an image signal processor. (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (NPU), etc. Among them, the different processing units may be independent devices or integrated in one or more processors.
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。In some embodiments, the processor 110 may include one or more interfaces. The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, and a universal asynchronous transmitter (universal asynchronous transmitter) interface. receiver/transmitter, UART) interface, mobile industry processor interface (MIPI), general-purpose input/output (GPIO) interface, subscriber identity module (SIM) interface, and / Or Universal Serial Bus (USB) interface, etc.
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器,使处理器110与触摸传感器通过I2C总线接口通信,实现电子设备100的触摸功能。The I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor, charger, flash, camera 193, etc. through different I2C bus interfaces. For example, the processor 110 may couple the touch sensor through an I2C interface, so that the processor 110 and the touch sensor communicate through the I2C bus interface to realize the touch function of the electronic device 100.
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is generally used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices. The MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100. The processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured through software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on. GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely illustrative and does not constitute a structural limitation of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。The charging management module 140 is used to receive charging input from the charger. Among them, the charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of the wired charger through the USB interface 130. In some embodiments of wireless charging, the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141可接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 can receive input from the battery 142 and/or the charging management module 140, and supply power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
电源管理模块141可用于监测电池容量,电池循环次数,电池充电电压,电池放电电压,电池健康状态(例如漏电,阻抗)等性能参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 can be used to monitor performance parameters such as battery capacity, battery cycle times, battery charging voltage, battery discharging voltage, and battery health status (such as leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example, antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna can be used in combination with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括一个或多个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁 波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100. The mobile communication module 150 may include one or more filters, switches, power amplifiers, low noise amplifiers (LNA), etc. The mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。The modem processor may include a modulator and a demodulator. Among them, the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(Bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成一个或多个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。The wireless communication module 160 can provide applications on the electronic device 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), and global navigation satellites. System (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions. The wireless communication module 160 may be one or more devices integrating one or more communication processing modules. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc. The GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite-based augmentation systems (SBAS).
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。The display screen 194 is used to display images, videos, etc. The display screen 194 includes a display panel. The display panel can adopt liquid crystal display (LCD), organic light-emitting diode (OLED), active-matrix organic light-emitting diode or active-matrix organic light-emitting diode (active-matrix organic light-emitting diode). AMOLED, flexible light-emitting diode (FLED), Miniled, MicroLed, Micro-oLed, quantum dot light-emitting diode (QLED), etc. In some embodiments, the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处 理器等实现拍摄功能。The electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。The ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye. ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or videos. In some embodiments, the mobile phone 100 may include 1 or N cameras, where N is a positive integer greater than 1. The camera 193 may be a front-facing camera or a rear-facing camera. As shown in FIG. 2, the camera 193 generally includes a lens and a photosensitive element (sensor), and the photosensitive element may be any photosensitive device such as a CCD (charge-coupled device) or a CMOS (complementary metal oxide semiconductor) sensor.
Still as shown in FIG. 2, during shooting, the light reflected by the photographed object passes through the lens and forms an optical image. The optical image is projected onto the photosensitive element, and the photosensitive element converts the received light signal into an electrical signal. The camera 193 then sends the obtained electrical signal to a DSP (digital signal processing) module for digital signal processing, and one frame of a digital image is finally obtained.
Similarly, when recording a video, the DSP can obtain consecutive frames of digital images according to the foregoing shooting principle, and these consecutive frames can be encoded at a certain frame rate to form a video. Because of the particular physiological structure of the human eye, when the frame rate of the viewed pictures is higher than 16 frames per second (fps), the human eye perceives the pictures as continuous; this phenomenon may be referred to as persistence of vision. To ensure that the video appears smooth to the user, the mobile phone may encode the frames of digital images output by the DSP at a certain frame rate (for example, 24 fps or 30 fps). For example, if the DSP collects 300 frames of digital images through the camera 193, the mobile phone may encode these 300 frames into a 10-second video (300 frames/30 fps = 10 s) at a preset frame rate of 30 fps.
One or more frames of the digital images output by the DSP may be output on the mobile phone 100 through the display 194, or may be stored in the internal memory 121 (or the external memory 120); this is not limited in this embodiment of this application.
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between human brain neurons, it can quickly process input information and can continuously learn by itself. The NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
The internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions. The processor 110 may run the instructions stored in the internal memory 121, so that the electronic device 100 performs the method for intelligently recommending contacts provided in some embodiments of this application, as well as various functional applications and data processing. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, and may further store one or more applications (for example, Gallery and Contacts). The data storage area may store data (for example, photos and contacts) created during use of the electronic device 101. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, for example, one or more magnetic disk storage devices, a flash memory device, or a universal flash storage (UFS). In some other embodiments, the processor 110 runs the instructions stored in the internal memory 121 and/or the instructions stored in a memory disposed in the processor, so that the electronic device 100 performs the method for intelligently recommending a number provided in the embodiments of this application, as well as various functional applications and data processing.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。The audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。The speaker 170A, also called a "speaker", is used to convert audio electrical signals into sound signals. The electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。The receiver 170B, also called "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a call or voice message, it can receive the voice by bringing the receiver 170B close to the human ear.
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置一个或多个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The microphone 170C, also called "microphone", "microphone", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can approach the microphone 170C through the mouth to make a sound, and input the sound signal to the microphone 170C. The electronic device 100 may be provided with one or more microphones 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
传感器180可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等,本申请实施例对此不做任何限制。The sensor 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc. The embodiments of the application This does not impose any restrictions.
当然,本申请实施例提供的电子设备100还可以包括按键190、马达191、指示器192以及SIM卡接口195等一项或多项器件,本申请实施例对此不做任何限制。Of course, the electronic device 100 provided in the embodiment of the present application may further include one or more devices such as the button 190, the motor 191, the indicator 192, and the SIM card interface 195, which is not limited in the embodiment of the present application.
为了方便清楚地理解下述各实施例,首先给出相关技术的简要介绍:In order to facilitate a clear understanding of the following embodiments, a brief introduction of related technologies is first given:
延时摄影,也可称为缩时摄影或缩时录影,是一种可将时间压缩的拍摄技术,可在短时间内再现景物缓慢变化的过程。开启延时摄影功能后,电子设备100可开始录制摄像头193捕捉到的每一帧拍摄画面。并且,电子设备100可按照一定的抽帧频率从摄像头193捕捉到的N(N>1)帧拍摄画面中抽取M(M<N)帧拍摄画面作为本次延时摄影的延时摄影视频。后续,当用户打开该延时摄影视频时,电子设备100可按照一定的帧率播放抽取到的上述M帧拍摄画面,从而通过这M帧拍摄画面重现电子设备100实际拍摄的N帧拍摄画面中的景物变化。Time-lapse photography, also called time-lapse photography or time-lapse video recording, is a shooting technique that can compress time and reproduce the process of slowly changing scenes in a short time. After the time-lapse photography function is turned on, the electronic device 100 can start to record each frame of the shooting picture captured by the camera 193. In addition, the electronic device 100 can extract M (M<N) frames from the N (N>1) frames captured by the camera 193 at a certain frame rate as the time-lapse photography video of this time-lapse photography. Subsequently, when the user opens the time-lapse photography video, the electronic device 100 can play the extracted M frames of shooting images at a certain frame rate, so as to reproduce the N frames of shooting images actually captured by the electronic device 100 through the M frames of shooting images Changes in the scene.
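As an illustrative aid only, the following Python sketch shows the basic relationship between the N captured frames, the M extracted frames, and the playback length of the resulting time-lapse video, assuming a fixed extraction interval and playback frame rate; the function name and all numeric values are examples and are not prescribed by this embodiment.

```python
def extract_frames(frames, capture_fps=30.0, sample_interval_s=1.0):
    # Keep one frame per `sample_interval_s` seconds of recorded footage.
    step = max(1, int(round(capture_fps * sample_interval_s)))
    return frames[::step]

captured = list(range(300))             # 300 frames captured at 30 fps = 10 s of recording
kept = extract_frames(captured)         # one frame per second -> 10 extracted frames
playback_duration_s = len(kept) / 30.0  # played back at 30 fps -> about 0.33 s of video
```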
Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on an imaging plane (for example, a shot picture). When the time interval is very small (for example, between two consecutive frames of a video), the optical flow is also equivalent to the displacement of the target point. When the human eye observes a moving object, the scene of the object forms a series of continuously changing images on the retina. This series of continuously changing information constantly "flows through" the retina (that is, the imaging plane), like a flow of light, and is therefore called optical flow.
Optical flow expresses how drastically an image changes, and it contains the motion information of objects between adjacent frames. For example, assume that the position of point A in the t-th frame is (x1, y1) and the position of point A in the (t+1)-th frame is (x2, y2). The optical flow intensity of point A between the t-th frame and the (t+1)-th frame can then be represented by a two-dimensional vector u, where u = (u, v) = (x2, y2) - (x1, y1). A larger optical flow intensity indicates that point A moves with a larger amplitude and a faster speed in the shot pictures at this time; a smaller optical flow intensity indicates that point A moves with a smaller amplitude and a slower speed.
Because each frame of a shot picture is composed of a plurality of pixels, the optical flow intensity of each pixel between two adjacent frames can be calculated according to the foregoing method. Then, based on the optical flow intensity of each pixel in the shot picture, the electronic device 100 can use a preset optical flow algorithm to determine the optical flow intensity between the t-th frame and the (t+1)-th frame. Similarly, a larger optical flow intensity indicates a larger change in the shot content between the t-th frame and the (t+1)-th frame, and a smaller optical flow intensity indicates a smaller change in the shot content between the two frames.
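As an illustrative aid only, the following Python sketch shows one way such a per-frame-pair optical flow intensity Q could be computed, here assuming OpenCV's dense Farneback optical flow; the embodiment does not prescribe a particular optical flow algorithm, library, or function names.

```python
import cv2
import numpy as np

def optical_flow_intensity(prev_bgr, next_bgr):
    """Mean per-pixel displacement magnitude between two adjacent frames (one value Q)."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense flow: one 2-D displacement vector u = (u, v) per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitudes = np.linalg.norm(flow, axis=2)   # |u| for every pixel
    return float(magnitudes.mean())             # Q for this pair of frames
```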
In this embodiment of this application, after the electronic device 100 enables the time-lapse photography function, the electronic device 100 may use a dynamic frame extraction frequency to extract M (M < N) frames from the N frames captured by the camera 193 as the video of this time-lapse photography. For example, if the optical flow intensities between adjacent frames in the first 10 seconds are all less than a preset value, the pictures shot in the first 10 seconds change little. In this case, the electronic device 100 may use a lower first frame extraction frequency (for example, one frame every 3 seconds) to extract the pictures shot in the first 10 seconds. If the optical flow intensities between adjacent frames in the 11th to 20th seconds are all greater than the preset value, the pictures shot in the 11th to 20th seconds change greatly. In this case, the electronic device 100 may use a higher second frame extraction frequency (for example, one frame every 0.5 seconds) to extract the pictures shot in the 11th to 20th seconds. Finally, the M frames extracted by the electronic device 100 form the video of this time-lapse photography.
That is, in the time-lapse photography method provided in the embodiments of this application, the electronic device can select, during recording, a corresponding frame extraction frequency according to how the shot content changes, and extract frames accordingly to form the time-lapse photography video. When the change of the shot content is not obvious, the shot content is moving relatively slowly, and the electronic device can extract frames at a lower frame extraction frequency. When the change of the shot content is obvious, the shot content is moving relatively quickly, and the electronic device can extract more frames at a higher frame extraction frequency; more frames can reflect the detailed changes of the shot content more accurately. In this way, the time-lapse photography video recorded by the electronic device highlights the fast-changing, most interesting parts, thereby improving the photographic effect of the time-lapse photography and the user experience. A minimal sketch of this two-level selection is given below.
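The sketch below assumes example threshold and interval values, none of which are fixed by the embodiment; it only illustrates the mapping from optical flow intensity to frame extraction frequency.

```python
FIRST_INTERVAL_S = 3.0    # lower "first frame extraction frequency": one frame every 3 s
SECOND_INTERVAL_S = 0.5   # higher "second frame extraction frequency": one frame every 0.5 s

def pick_interval(q, first_threshold=2.0):
    # Small optical flow intensity -> scene changes little -> sample sparsely.
    # Large optical flow intensity -> scene changes a lot -> sample densely.
    return FIRST_INTERVAL_S if q < first_threshold else SECOND_INTERVAL_S
```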
以下将以手机为上述电子设备100举例,详细阐述本申请实施例提供的一种延时摄影的录制方法,如图3所示,该方法包括步骤S301-S306。The following will take a mobile phone as an example of the above-mentioned electronic device 100 to describe in detail a recording method of time-lapse photography provided in an embodiment of the present application. As shown in FIG. 3, the method includes steps S301-S306.
S301、响应于用户在延时摄影模式中执行的录制操作,手机开始采集摄像头捕捉到的每一帧拍摄画面。S301. In response to the recording operation performed by the user in the time-lapse photography mode, the mobile phone starts to collect each frame of the shooting picture captured by the camera.
一般,手机的相机应用中均设置有延时摄影的功能选项。如图4所示,检测到用户打开相机应用后,手机显示的预览界面401中设置有延时摄影的功能选项402。如果检测到用户选中该功能选项402,手机可进入相机应用的延时摄影模式。当然,预览界面401中还可以设置照片模式、人像模式、全景模式、视频模式或慢动作模式等一项或多项拍摄模式,本申请实施例对此不做任何限制。Generally, the camera application of the mobile phone has the function option of time-lapse photography. As shown in FIG. 4, after detecting that the user opens the camera application, the preview interface 401 displayed by the mobile phone is set with a function option 402 of time-lapse photography. If it is detected that the user selects the function option 402, the mobile phone can enter the time-lapse photography mode of the camera application. Of course, one or more shooting modes such as photo mode, portrait mode, panorama mode, video mode, or slow motion mode can also be set in the preview interface 401, and the embodiment of the present application does not impose any limitation on this.
手机进入延时摄影模式后,预览界面401可显示摄像头当前捕捉到的拍摄画面403。由于此时还未开始录制延时摄影视频,因此,预览界面401中实时显示的拍摄画面403可称为预览画面。另外,预览界面401中还包括延时摄影的录制按钮404。如果检测到用户点击预 览界面401中的录制按钮404,说明用户在延时摄影模式下执行了录制操作,此时,手机可继续使用摄像头采集摄像头捕捉到的每一帧拍摄画面,开始录制延时摄影视频。After the mobile phone enters the time-lapse photography mode, the preview interface 401 can display the shooting image 403 currently captured by the camera. Since the time-lapse photography video has not yet been recorded at this time, the shooting screen 403 displayed in real time in the preview interface 401 may be referred to as a preview screen. In addition, the preview interface 401 also includes a recording button 404 for time-lapse photography. If it is detected that the user clicks the record button 404 in the preview interface 401, it means that the user has performed the recording operation in the time-lapse photography mode. At this time, the mobile phone can continue to use the camera to capture each frame of the camera captured by the camera and start recording the delay Photography video.
For example, the mobile phone may collect each frame of the shot picture at a certain collection frequency. Taking a collection frequency of 30 frames per second as an example, after the user taps the record button 404 in the preview interface 401, the mobile phone can collect 30 frames of shot pictures within the 0th to 1st second, and another 30 frames within the 1st to 2nd second. As the recording time elapses, the number of frames collected by the mobile phone gradually accumulates, and the frames that the mobile phone extracts from these collected pictures at the frame extraction frequency finally form the time-lapse photography video of this recording.
在本申请实施例中,手机可将采集到的每一帧拍摄画面形成的视频流复制为双路视频流。每一路视频流中均包括手机开启延时摄影录制功能后实时采集到的每一帧拍摄画面。进而,手机可使用其中的一路视频流执行下述步骤S302,完成延时摄影过程中预览界面的显示任务。同时,手机可使用另一路视频流执行下述步骤S303-S305,完成延时摄影视频的制作任务。In the embodiment of the present application, the mobile phone can copy the video stream formed by each frame of the captured picture into a dual-channel video stream. Each video stream includes every frame captured in real time after the mobile phone turns on the time-lapse recording function. Furthermore, the mobile phone can use one of the video streams to execute the following step S302 to complete the display task of the preview interface during the time-lapse photography. At the same time, the mobile phone can use another video stream to perform the following steps S303-S305 to complete the time-lapse video production task.
S302、手机在预览界面中实时显示采集到的每一帧拍摄画面。S302. The mobile phone displays each frame of the captured shooting picture in the preview interface in real time.
在步骤S302中,手机开始录制延时摄影视频后,如图5所示,手机可在预览界面401中实时显示摄像头当前采集到的每一帧拍摄画面501。并且,手机可在预览界面401中显示正在拍摄的提示502,以提示用户当前处于录制状态。当然,手机还可以在录制按钮404上加载相应的动画特效,以提示用户当前处于录制状态。In step S302, after the mobile phone starts to record the time-lapse photography video, as shown in FIG. 5, the mobile phone can display in the preview interface 401 in real time each frame of the shooting picture 501 currently collected by the camera. In addition, the mobile phone may display a shooting prompt 502 in the preview interface 401 to remind the user that the user is currently in the recording state. Of course, the mobile phone can also load corresponding animation special effects on the recording button 404 to remind the user that the user is currently in the recording state.
在一些实施例中,仍如图5所示,手机还可以在预览界面401中显示当前已录制的时长503。已录制的时长503可反映出当前录制时间的长短。并且,手机还可以在预览界面401中显示与当前已录制的时长503对应的延时摄影视频的时长504。例如,如果手机按照30帧/秒的帧率播放最终形成的延时摄影视频,则当延时摄影视频的时长504为1秒时,需要从手机采集的拍摄画面中抽取30帧拍摄画面。如果手机抽取拍摄画面的抽帧频率为每秒抽取一帧,则手机每30秒可抽取到30帧拍摄画面。也就是说,当已录制的时长503为30秒时,对应的延时摄影视频的时长504为1秒。这样,通过延时摄影可以将较长时间内录制的视频压缩为较短时间的延时摄影视频,从而在短时间内再现景物缓慢变化的过程。In some embodiments, as shown in FIG. 5, the mobile phone may also display the currently recorded duration 503 in the preview interface 401. The recorded time 503 may reflect the current recording time. In addition, the mobile phone can also display the duration 504 of the time-lapse video corresponding to the currently recorded duration 503 in the preview interface 401. For example, if the mobile phone plays the final time-lapse photography video at a frame rate of 30 frames per second, when the duration 504 of the time-lapse photography video is 1 second, 30 frames of shooting pictures need to be extracted from the shooting pictures collected by the mobile phone. If the frame rate of the mobile phone extracting the shooting picture is one frame per second, the mobile phone can extract 30 frames of the shooting picture every 30 seconds. That is, when the recorded duration 503 is 30 seconds, the duration 504 of the corresponding time-lapse photography video is 1 second. In this way, through time-lapse photography, a video recorded in a longer period of time can be compressed into a shorter time-lapse photography video, thereby reproducing the process of slowly changing scenes in a short period of time.
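As a sketch only, the relationship between the recorded duration 503, the extraction interval, the playback frame rate, and the displayed time-lapse duration 504 could be expressed as follows; the helper name and the default values are illustrative assumptions, not values required by the embodiment.

```python
def timelapse_length_s(recorded_s, extract_interval_s=1.0, playback_fps=30.0):
    # Number of frames kept so far, divided by the playback frame rate.
    return (recorded_s / extract_interval_s) / playback_fps

print(timelapse_length_s(30.0))   # 30 s recorded -> 1.0 s of time-lapse video
```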
S303、手机确定采集到的N帧拍摄画面中相邻两帧拍摄画面的光流强度。S303: The mobile phone determines the optical flow intensity of two adjacent frames of the captured N frames of captured images.
In step S303, as shown in FIG. 6, after the mobile phone starts the recording function of time-lapse photography, it can collect N frames of shot pictures at a certain collection frequency, where N changes dynamically with the length of the recording time. For example, after the mobile phone obtains the first frame and the second frame, it can use a preset optical flow algorithm to calculate the optical flow intensity Q(1) between the first frame and the second frame. A larger Q(1) indicates a larger change of the image between the first frame and the second frame, that is, a faster movement of the scene between these two frames. Similarly, after the mobile phone obtains the third frame, it can continue to calculate the optical flow intensity Q(2) between the second frame and the third frame; after obtaining the fourth frame, it can continue to calculate the optical flow intensity Q(3) between the third frame and the fourth frame; ...; and after obtaining the N-th frame, the mobile phone can continue to calculate the optical flow intensity Q(N-1) between the (N-1)-th frame and the N-th frame.
每次手机计算出的光流强度Q可以反映出相邻两帧拍摄画面的变化程度。进而,如图7所示,手机根据每次获取到的相邻两帧拍摄画面之间的光流强度Q,可确定在录制本次延时摄影视频过程中光流强度Q随录制时间的变化情况,即光流强度Q-录制时间的变化曲线(后续实施例中称为光流曲线)。该光流曲线可以反映出录制过程中拍摄画面的变化情况。需要说明的是,由于录制时间的长短一般是用户手动操作的,因此该光流曲线可随着录制时间的长 短动态变化。The optical flow intensity Q calculated by the mobile phone each time can reflect the degree of change of the two adjacent frames. Furthermore, as shown in Fig. 7, the mobile phone can determine the change of the optical flow intensity Q with the recording time during the recording of this time-lapse video according to the optical flow intensity Q between two adjacent frames of the shooting pictures obtained each time. Situation, that is, the change curve of optical flow intensity Q-recording time (referred to as optical flow curve in subsequent embodiments). The optical flow curve can reflect the changes of the shooting picture during the recording process. It should be noted that since the length of the recording time is generally manually operated by the user, the optical flow curve can dynamically change with the length of the recording time.
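As an illustrative sketch, the optical flow curve could be accumulated as frames arrive, for example as follows; `flow_intensity` stands for any per-frame-pair intensity measure, such as the Farneback-based helper sketched earlier, and the function and parameter names are assumptions rather than part of the embodiment.

```python
import cv2

def build_flow_curve(video_path, flow_intensity, capture_fps=30.0):
    """Return a list of (recording_time_s, Q) samples, one per adjacent frame pair."""
    cap = cv2.VideoCapture(video_path)
    curve, prev, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if prev is not None:
            curve.append((index / capture_fps, flow_intensity(prev, frame)))
        prev, index = frame, index + 1
    cap.release()
    return curve
```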
Taking the time-lapse photography of a sunrise as an example, still as shown in FIG. 7, the shot content generally changes slowly before and after the sunrise, so the optical flow intensity Q corresponding to these two periods is relatively small. If the mobile phone extracts frames at a high frame extraction frequency before and after the sunrise, the resulting time-lapse photography video contains many pictures taken before and after the sunrise, and there is no significant change between these pictures.
而在日出过程中太阳的运动相对速度较快,因此,与日出中这段时间对应的光流强度Q的取值相对较大。如果在拍摄日出中这段时间内手机以较低的抽帧频率抽取拍摄画面,则最终形成的延时摄影视频中与日出相关的拍摄画面较少且这些拍摄画面之间的变化较大。对此,手机通过执行下述步骤S304,可根据上述光流曲线可动态设置各个录制时段的抽帧频率,以提高延时摄影的摄影效果。The relative speed of the sun during the sunrise is relatively fast. Therefore, the value of the optical flow intensity Q corresponding to this period of time during the sunrise is relatively large. If the mobile phone extracts the shooting images at a lower frame rate during the sunrise shooting, the resulting time-lapse video will have fewer sunrise-related shooting images and greater changes between these shooting images . In this regard, the mobile phone can dynamically set the frame drawing frequency of each recording period according to the above optical flow curve by executing the following step S304, so as to improve the photographic effect of time-lapse photography.
S304a、若在T1时间内与N1帧拍摄画面对应的光流强度均小于第一阈值,则手机按照第一抽帧频率从上述N1帧拍摄画面中抽取X帧拍摄画面。S304a: If the optical flow intensity corresponding to the N1 frame of the shooting image within T1 is less than the first threshold, the mobile phone extracts X frames of the shooting image from the N1 frame of the shooting image according to the first frame sampling frequency.
S304b、若在T2时间内与N2帧拍摄画面对应的光流强度均大于第一阈值,则手机按照第二抽帧频率从上述N2帧拍摄画面中抽取Y帧拍摄画面。S304b: If the optical flow intensity corresponding to the N2 frames of the shooting pictures within the T2 time is greater than the first threshold, the mobile phone extracts Y frames of the shooting pictures from the N2 frames of the shooting pictures according to the second frame sampling frequency.
在步骤S304a-S304b中,手机可以实时检测步骤S303形成的光流曲线中光流强度Q的大小。仍以图7所示的光流曲线举例,如果检测到该光流曲线从第1秒至第10秒内所对应的光流强度Q均小于第一阈值,说明第1秒至第10秒内拍摄的景物没有发生剧烈变化,手机可将第1秒至第10秒内的抽帧频率设置为取值较低的第一抽帧频率,例如,第一抽帧频率为每2秒抽取1帧拍摄画面。那么,如图8所示,手机可在第1秒至第10秒内采集的N1帧拍摄画面中,以每2秒抽取1帧的第一抽帧频率抽取到5帧拍摄画面。In steps S304a-S304b, the mobile phone can detect the magnitude of the optical flow intensity Q in the optical flow curve formed in step S303 in real time. Still taking the optical flow curve shown in Figure 7 as an example, if it is detected that the optical flow intensity Q corresponding to the optical flow curve from the first second to the tenth second is less than the first threshold, it means that the first second to the tenth second The shot scene has not changed drastically. The mobile phone can set the frame rate from the 1st to the 10th second to the lower first frame rate. For example, the first frame rate is 1 frame every 2 seconds. Take a picture. Then, as shown in FIG. 8, the mobile phone can extract 5 frames of shooting images at the first frame rate of 1 frame every 2 seconds from the N1 frames of shooting images collected from the 1st to 10th seconds.
Correspondingly, still taking the optical flow curve shown in FIG. 7 as an example, if it is detected that the optical flow intensities Q corresponding to the 10th to 15th seconds of the optical flow curve are all greater than the first threshold, the scene shot in the 10th to 15th seconds changes drastically. The mobile phone may set the frame extraction frequency for the 10th to 15th seconds to a higher second frame extraction frequency, for example, one frame every 1 second. Then, as shown in FIG. 9, the mobile phone can extract 5 frames from the N2 frames of shot pictures collected in the 10th to 15th seconds at the second frame extraction frequency of one frame every 1 second.
Certainly, if this time-lapse photography recording has not ended after the 15th second, the mobile phone may continue to dynamically adjust the frame extraction frequency of different recording periods according to the optical flow curve after the 15th second, which is not limited in this embodiment of this application.
也就是说,当检测到拍摄画面中景物的变化不太明显时,手机可使用较低的第一抽帧频率从手机采集到的拍摄画面中抽取帧画面;当检测到拍摄画面中景物的变化较为明显时,手机可使用较高的第二抽帧频率从手机采集到的拍摄画面中抽取帧画面。这样,当拍摄的景物发生明显变化时,手机通过较高的第二抽帧频率能够抽取到数目更多的拍摄画面,这些数目更多的拍摄画面能够更加准确的反映出拍摄内容的细节变化,从而在最终录制的延时摄影视频中重点呈现出景物发生明显变化时的动态过程。In other words, when it is detected that the change in the scene in the shooting picture is not obvious, the mobile phone can use a lower first frame extraction frequency to extract frames from the shooting picture collected by the mobile phone; when the change in the scene in the shooting picture is detected When it is more obvious, the mobile phone can use a higher second frame extraction frequency to extract frames from the shooting images collected by the mobile phone. In this way, when the captured scene changes significantly, the mobile phone can extract a larger number of shots through a higher second frame rate, and these more shots can more accurately reflect the changes in the details of the shot content. Thus, in the final recorded time-lapse photography video, the dynamic process when the scene changes significantly is highlighted.
In some other embodiments, the mobile phone may also dynamically adjust the frame extraction frequency of different recording periods in units of a preset time window. As shown in (a) of FIG. 10, the mobile phone may preset a time window 1001 with a length of 5 seconds. Starting from the beginning of the optical flow curve, the mobile phone can use the time window 1001 to calculate the average value of the optical flow intensity Q within the window, that is, the average value of Q from the 1st second to the 5th second. If the calculated average value is greater than a threshold, the mobile phone may extract frames from the pictures collected within the time window 1001 at the higher second frame extraction frequency; if the calculated average value is less than the threshold, the mobile phone may extract frames from the pictures collected within the time window 1001 at the lower first frame extraction frequency. Then, as shown in (b) of FIG. 10, the mobile phone may move the time window 1001 to the next 5 seconds of the optical flow curve (that is, the 5th to 10th seconds), and repeat the calculation of the average value of Q within the time window 1001 after each move. In this way, the mobile phone can dynamically adjust, in units of 5 seconds, the frame extraction frequency of different recording periods of this time-lapse photography video. Certainly, the size of the time window may be fixed or may change dynamically, which is not limited in this embodiment of this application.
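A minimal sketch of this windowed variant is given below, assuming a fixed 5-second window and example threshold and interval values, none of which are mandated by the embodiment.

```python
def interval_per_window(flow_curve, window_s=5.0, threshold=2.0,
                        first_interval_s=2.0, second_interval_s=0.5):
    """flow_curve: list of (recording_time_s, Q); returns (window_start_s, interval_s) decisions."""
    decisions, start = [], 0.0
    if not flow_curve:
        return decisions
    while start <= flow_curve[-1][0]:
        window_q = [q for t, q in flow_curve if start <= t < start + window_s]
        mean_q = sum(window_q) / len(window_q) if window_q else 0.0
        interval = second_interval_s if mean_q > threshold else first_interval_s
        decisions.append((start, interval))
        start += window_s
    return decisions
```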
另外,上述实施例中以设置了两个抽帧频率(即第一抽帧频率和第二抽帧频率)供手机动态调整为例阐述了抽帧频率的动态调整方法。可以理解的是,手机也可以设置三个或更多抽帧频率供手机动态调整。In addition, in the foregoing embodiment, the method for dynamically adjusting the frame rate is explained by taking the setting of two frame sampling frequencies (ie, the first frame rate and the second frame rate) for dynamic adjustment of the mobile phone as an example. It is understandable that the mobile phone can also set three or more frame draw frequencies for the mobile phone to dynamically adjust.
For example, as shown in FIG. 11, the mobile phone may preset a first threshold and a second threshold of the optical flow intensity (the second threshold is greater than the first threshold). When it is detected that the optical flow intensity Q of the optical flow curve from moment 0 to moment T1 is less than the first threshold, the mobile phone may extract X frames of shot pictures within 0 to T1 at the first frame extraction frequency. When it is detected that the optical flow intensity Q from moment T1 to moment T2 is greater than the first threshold and less than the second threshold, the mobile phone may extract Y frames of shot pictures within T1 to T2 at the second frame extraction frequency (the second frame extraction frequency is greater than the first). When it is detected that the optical flow intensity Q after moment T2 is greater than the second threshold, the mobile phone may extract Z frames of shot pictures after T2 at a third frame extraction frequency (the third frame extraction frequency is greater than the second). That is, the more obvious and drastic the change of the scene in the shot pictures, the higher the frame extraction frequency used by the mobile phone in the time-lapse photography; correspondingly, the slower the change of the scene, the lower the frame extraction frequency.
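A sketch of this three-level mapping follows, with purely illustrative thresholds and intervals that are not values fixed by the embodiment.

```python
def pick_interval_three_tier(q, first_threshold=1.0, second_threshold=3.0,
                             intervals_s=(3.0, 1.0, 0.5)):
    # intervals_s holds the first, second and third frame extraction intervals.
    if q < first_threshold:
        return intervals_s[0]
    if q < second_threshold:
        return intervals_s[1]
    return intervals_s[2]
```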
在另一些实施例中,手机还可以向用户提供手动设置不同录制时段的抽帧频率的功能。例如,在录制延时摄影视频之前,用户可以手动在相机的设置界面中输入不同录制时段的抽帧频率。例如,用户可以设置整个延时摄影的录制时间为30分钟,在这30分钟内,前10分钟和最后10分钟内用户设置的抽帧频率为每1秒抽取1帧拍摄画面,在中间的10分钟用户设置的抽帧频率为每0.5秒抽取1帧拍摄画面。后续,在录制延时摄影视频的过程中,手机可根据用户设置的抽帧频率在不同录制时间段动态调整相应的抽帧频率,本申请实施例对此不做任何限制。In other embodiments, the mobile phone may also provide the user with the function of manually setting the frame rate for different recording periods. For example, before recording a time-lapse video, the user can manually enter the frame rate for different recording periods in the camera's setting interface. For example, the user can set the recording time of the entire time-lapse photography to 30 minutes. Within these 30 minutes, the frame rate set by the user in the first 10 minutes and the last 10 minutes is 1 frame per second. The frame rate set by the user in minutes is 1 frame taken every 0.5 seconds. Later, in the process of recording the time-lapse video, the mobile phone can dynamically adjust the corresponding frame sampling frequency in different recording time periods according to the frame sampling frequency set by the user, and the embodiment of the present application does not impose any limitation on this.
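Such a user-defined schedule could, as a sketch only, be represented and queried as follows; the data layout and the values mirror the example above but are otherwise assumptions for illustration.

```python
schedule = [
    # (start_min, end_min, sampling_interval_s)
    (0.0, 10.0, 1.0),    # first 10 minutes: one frame every second
    (10.0, 20.0, 0.5),   # middle 10 minutes: one frame every 0.5 s
    (20.0, 30.0, 1.0),   # last 10 minutes: one frame every second
]

def interval_at(recording_time_s, schedule=schedule):
    minute = recording_time_s / 60.0
    for start_min, end_min, interval_s in schedule:
        if start_min <= minute < end_min:
            return interval_s
    return schedule[-1][2]   # after the planned 30 minutes, keep the last setting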
S305、响应于用户在延时摄影模式中执行的停止录制操作,手机将抽取到的M帧拍摄画面编码为延时摄影视频,M帧拍摄画面包括上述X帧拍摄画面和Y帧拍摄画面。S305. In response to the recording stop operation performed by the user in the time-lapse photography mode, the mobile phone encodes the extracted M-frame shooting picture into a time-lapse photography video, and the M-frame shooting picture includes the foregoing X-frame shooting picture and Y-frame shooting picture.
当用户希望结束本次延时摄影时,如图12所示,用户可再次点击预览界面401中的录制按钮404。响应于用户点击录制按钮404的停止录制操作,手机可将此时抽取到的M帧拍摄画面按时间顺序编码为本次延时摄影视频。其中,抽取到的M帧拍摄画面中包括手机在不同录制时间段内按照相应的抽帧频率抽取到的各个拍摄画面。When the user wants to end this time-lapse photography, as shown in FIG. 12, the user can click the record button 404 in the preview interface 401 again. In response to the user clicking the record button 404 to stop the recording operation, the mobile phone may encode the M frames of shooting pictures extracted at this time into this time-lapse photography video in chronological order. Among them, the extracted M frames of shooting pictures include the respective shooting pictures extracted by the mobile phone according to the corresponding frame sampling frequency in different recording time periods.
示例性的,用户点击录制按钮404时,如果手机一共抽取到300帧拍摄画面,则手机可按照预设帧率对这300帧拍摄画面进行编码。以预设帧率为30帧/秒举例,手机可将抽取到的300帧拍摄画面按照30帧/秒的帧率编码为长度为10秒的延时摄影视频。Exemplarily, when the user clicks the record button 404, if the mobile phone extracts a total of 300 frames of shooting images, the mobile phone may encode the 300 frames of shooting images according to the preset frame rate. Taking the preset frame rate of 30 frames per second as an example, the mobile phone can encode the extracted 300 frames of shooting images into a time-lapse video of 10 seconds at a frame rate of 30 frames per second.
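As an illustration only, the following sketch encodes the extracted frames at a preset playback frame rate using OpenCV's VideoWriter; the embodiment only requires that the M extracted frames be encoded in chronological order, so the encoder, codec, and container chosen here are assumptions.

```python
import cv2

def encode_timelapse(frames, path="timelapse.mp4", playback_fps=30.0):
    # `frames` are the M extracted pictures (BGR arrays), already in chronological order.
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                             playback_fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()          # e.g. 300 frames at 30 fps -> a 10-second clip
```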
The foregoing embodiment describes a method for dynamically adjusting the frame extraction frequency of time-lapse photography by using the mobile phone's calculation of the optical flow intensity of the shot pictures as an example. In some other embodiments of this application, correspondences between different shooting scenes and different frame extraction frequencies may be stored in the mobile phone in advance. For example, when the shooting scene is a slowly changing scene such as a sky scene, the corresponding frame extraction frequency is one frame every 2 seconds; when the shooting scene is a quickly changing scene such as a flying-bird scene, the corresponding frame extraction frequency is one frame every 0.5 seconds.
In this way, after the mobile phone starts to record the time-lapse photography video, the mobile phone can identify, in real time, the shooting scene being shot in the collected pictures. For example, the mobile phone can identify the shooting scene through algorithms such as image analysis or AI (artificial intelligence) recognition. As shown in FIG. 13, if the mobile phone identifies that the shooting scenes from the 1st to the 10th second are all sky scenes, the mobile phone can extract frames from the pictures collected in the 1st to 10th seconds at the first frame extraction frequency of one frame every 2 seconds. If the mobile phone identifies that the shooting scenes from the 10th to the 14th second are all flying-bird scenes, the mobile phone can extract frames from the pictures collected in the 11th to 14th seconds at the second frame extraction frequency of one frame every 0.5 seconds. Finally, the mobile phone can encode the extracted pictures into this time-lapse photography video.
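A sketch of such a scene-to-frequency lookup is shown below; the scene labels, intervals, and default value are illustrative assumptions, and the scene classifier itself is outside this sketch.

```python
SCENE_INTERVALS_S = {
    "sky": 2.0,     # slowly changing scene: one frame every 2 s
    "bird": 0.5,    # quickly changing scene: one frame every 0.5 s
}

def interval_for_scene(scene_label, default_s=1.0):
    return SCENE_INTERVALS_S.get(scene_label, default_s)
```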
That is, when recording a time-lapse photography video, the mobile phone can use different frame extraction frequencies in different shooting scenes to extract, from the recorded pictures, the pictures that form the time-lapse photography video, so that the finally formed time-lapse photography video can accurately present the dynamic process of scene changes in different shooting scenes, thereby improving the shooting effect of the time-lapse photography video and the user experience.
Alternatively, in some other embodiments of this application, after the mobile phone starts to record the time-lapse photography video, the mobile phone can identify, in real time, the shooting target in the collected pictures and the movement speed of the shooting target. As shown in FIG. 14, if the mobile phone identifies that the shooting target of the pictures taken in the 1st to 5th seconds is a person 1401, and the movement speed of the person 1401 is less than a preset value, the mobile phone can extract frames from the pictures collected in the 1st to 5th seconds at the first frame extraction frequency of one frame every 2 seconds. If the mobile phone identifies that the shooting target of the pictures taken in the 5th to 10th seconds is a person 1402, and the movement speed of the person 1402 is greater than the preset value, the mobile phone can extract frames from the pictures collected in the 5th to 10th seconds at the second frame extraction frequency of one frame every 0.5 seconds. Finally, the mobile phone can encode the extracted pictures into this time-lapse photography video.
需要说明的是,上述拍摄目标可以是用户手动选择的,例如,用户可以通过手动对焦的方式在预览界面显示的拍摄画面中选择拍摄目标。或者,上述拍摄目标也可以是手机通过图像分析或AI识别等算法自动识别出来的,本申请实施例对此不做任何限制。It should be noted that the foregoing shooting target may be manually selected by the user. For example, the user may select the shooting target in the shooting screen displayed on the preview interface by manual focusing. Alternatively, the aforementioned shooting target may also be automatically recognized by the mobile phone through image analysis or AI recognition and other algorithms, and the embodiment of the present application does not impose any limitation on this.
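As an illustrative sketch, the movement speed of the shooting target could be estimated from the displacement of its bounding-box centre between adjacent frames, regardless of whether the target was selected manually or recognized automatically; the function names, box format, and threshold below are assumptions rather than values given by the embodiment.

```python
def target_speed_px_per_s(prev_box, next_box, capture_fps=30.0):
    """Boxes are (x, y, w, h); speed is centre displacement per second, in pixels."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = prev_box, next_box
    c1 = (x1 + w1 / 2.0, y1 + h1 / 2.0)
    c2 = (x2 + w2 / 2.0, y2 + h2 / 2.0)
    dist = ((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2) ** 0.5
    return dist * capture_fps

def interval_for_target(speed_px_per_s, speed_threshold=50.0):
    # Fast-moving target -> sample densely; slow-moving target -> sample sparsely.
    return 0.5 if speed_px_per_s > speed_threshold else 2.0
```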
It should be noted that the foregoing embodiment is described by using the time-lapse photography video of a sunrise as an example. It can be understood that the mobile phone may also use the foregoing time-lapse photography recording method to record a time-lapse photography video of a sunset, a flower blooming, or a flower withering process, which is not limited in this embodiment of this application.
Taking the time-lapse photography video of a flower blooming as an example, after the mobile phone starts to record each frame of the shot picture, the mobile phone can determine the optical flow intensity Q between every two adjacent frames collected. Generally, while waiting for the flower to open, the shot content does not change much, and the optical flow intensity Q between two adjacent frames is generally small. When the flower is opening, the change of the shot content increases, and the optical flow intensity Q between two adjacent frames is generally larger. Then, if it is detected within time T1 that the optical flow intensities Q between adjacent frames are all less than the first threshold, indicating that the scene in the shot pictures changes little, the mobile phone may extract X frames at the lower first frame extraction frequency; if it is detected within time T2 that the optical flow intensities Q between adjacent frames are all greater than or equal to the first threshold, indicating that the scene changes obviously, the mobile phone may extract Y frames at the higher second frame extraction frequency. In this way, the finally formed time-lapse photography video can accurately present the dynamic process of the bud opening during blooming, thereby improving the shooting effect of the time-lapse photography video and the user experience.
Certainly, when shooting a time-lapse photography video of a flower blooming, in addition to adjusting the frame extraction frequency in real time according to the optical flow intensity Q between two adjacent frames, the mobile phone may also adjust the frame extraction frequency according to other parameters that can reflect how much the shot content changes. For example, the mobile phone may take the flower in the shot picture as the shooting target and detect the movement speed of the shooting target. The mobile phone may then adjust the frame extraction frequency in real time according to the movement speed of the shooting target to extract the pictures of this time-lapse photography video, which is not limited in this embodiment of this application.
That is, when recording a time-lapse photography video, if the mobile phone detects that the shooting target moves slowly, the mobile phone may use a lower frame extraction frequency to extract the pictures of the time-lapse photography video from the recorded pictures; if the mobile phone detects that the shooting target moves quickly, the mobile phone may use a higher frame extraction frequency to extract the pictures of the time-lapse photography video from the recorded pictures, so that the parts in which the shooting target moves quickly are highlighted in the time-lapse photography video.
在本申请的另一些实施例中,手机在录制延时摄影视频的过程可保存摄像头采集到的每一帧拍摄画面,无需执行上述步骤S303-S305。而当用户停止录制延时摄影视频后,手机可通过执行上述步骤S303-S304动态调整不同录制时间段的抽帧频率,并在不同录制时间段内按照相应的抽帧频率抽取各个拍摄画面,形成本次的延时摄影视频,本申请实施例对此不做任何限制。In some other embodiments of the present application, the mobile phone can save each frame of the shooting picture collected by the camera during the process of recording the time-lapse video without performing the above steps S303-S305. After the user stops recording the time-lapse photography video, the mobile phone can dynamically adjust the frame drawing frequency of different recording time periods by performing the above steps S303-S304, and extract each shooting picture according to the corresponding frame drawing frequency during different recording time periods to form This time-lapse photography video does not impose any restriction on this embodiment of the application.
S306、响应于用户打开上述延时摄影视频的操作,手机按照预设帧率播放上述延时摄影视频。S306. In response to the user's operation of opening the aforementioned time-lapse photography video, the mobile phone plays the aforementioned time-lapse photography video at a preset frame rate.
用户结束本次延时摄影后,手机可将步骤S305得到的延时摄影视频存储在手机的相册应用(也可称为图库应用)中。如图15所示,用户在相册应用中预览该延时摄影视频1501时,如果检测到用户点击延时摄影视频1501上的播放按钮1502,手机可按照编码该延时摄影视频1501时的帧率(例如30帧/秒)播放延时摄影视频1501。After the user finishes this time-lapse photography, the mobile phone can store the time-lapse photography video obtained in step S305 in the photo album application (also referred to as a gallery application) of the mobile phone. As shown in Figure 15, when the user previews the time-lapse video 1501 in the album application, if it is detected that the user clicks the play button 1502 on the time-lapse video 1501, the mobile phone can encode the time-lapse video 1501 at the frame rate (For example, 30 frames/second) play the time-lapse video 1501.
When the time-lapse photography video 1501 is played, because the mobile phone used a higher frame extraction frequency for the pictures in which the scene changed drastically and put those pictures into the time-lapse photography video 1501, more pictures showing this scene change are included in the time-lapse photography video 1501. For example, when recording the time-lapse photography video 1501, the mobile phone increased the frame extraction frequency from one frame every 1 second to one frame every 0.5 seconds during the period in which the sun rose. The mobile phone therefore extracted more pictures of the rising sun, and these extracted pictures can reflect the detailed changes of the sunrise more vividly and accurately, so that when the time-lapse photography video 1501 is played, the dynamic process in which the scene changes obviously is highlighted, thereby improving the recording effect of the time-lapse photography and the user experience.
如图16所示,本申请实施例公开了一种电子设备,该电子设备可用于实现以上各个方法实施例中记载的方法。该电子设备具体可以包括:采集模块1601、预览模块1602、光流分析模块1603、抽帧模块1604、编码模块1605以及播放模块1606。As shown in FIG. 16, an embodiment of the present application discloses an electronic device, which can be used to implement the methods described in each of the above method embodiments. The electronic device may specifically include: an acquisition module 1601, a preview module 1602, an optical flow analysis module 1603, a frame extraction module 1604, an encoding module 1605, and a playback module 1606.
其中,采集模块1601用于支持电子设备执行图3中的过程S301;预览模块1602支持电子设备执行图3中的过程S302;光流分析模块1603用于支持电子设备执行图3中的过程S303;抽帧模块1604用于支持电子设备执行图3中的过程S304a、S304b、……;编码模块1605用于支持电子设备执行图3中的过程S305;播放模块1606用于支持电子设备执行图3中的过程S306。其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。Among them, the collection module 1601 is used to support the electronic device to perform the process S301 in FIG. 3; the preview module 1602 is used to support the electronic device to perform the process S302 in FIG. 3; the optical flow analysis module 1603 is used to support the electronic device to perform the process S303 in FIG. 3; The frame extraction module 1604 is used to support the electronic device to perform the processes S304a, S304b, ... in FIG. 3; the encoding module 1605 is used to support the electronic device to perform the process S305 in FIG. 3; the playback module 1606 is used to support the electronic device to perform the processes in FIG. The process S306. Among them, all relevant content of each step involved in the above method embodiment can be cited in the function description of the corresponding function module, and will not be repeated here.
As shown in FIG. 17, an embodiment of this application discloses an electronic device, including: a touchscreen 1701, where the touchscreen 1701 includes a touch-sensitive surface 1706 and a display screen 1707; one or more processors 1702; a memory 1703; one or more cameras 1708; and one or more computer programs 1704. The foregoing components may be connected through one or more communication buses 1705. The one or more computer programs 1704 are stored in the memory 1703 and are configured to be executed by the one or more processors 1702. The one or more computer programs 1704 include instructions, and the instructions may be used to perform the steps in the foregoing embodiments.
For example, the processor 1702 may be specifically the processor 110 shown in FIG. 1, the memory 1703 may be specifically the internal memory 121 and/or the external memory 120 shown in FIG. 1, the display screen 1707 may be specifically the display screen 194 shown in FIG. 1, the camera 1708 may be specifically the camera 193 shown in FIG. 1, and the touch-sensitive surface 1706 may be specifically the touch sensor in the sensor module 180 shown in FIG. 1, which is not limited in this embodiment of this application.
Based on the foregoing descriptions of the implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example for illustration. In actual application, the foregoing functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (22)

  1. A recording method for time-lapse photography, characterized in that the method comprises:
    an electronic device displays a preview interface of a time-lapse photography mode;
    in response to a recording operation performed by a user, the electronic device starts to record each frame of shooting picture captured by a camera;
    the electronic device extracts, using a first frame extraction frequency, X frames of shooting pictures from N1 frames of shooting pictures collected in a first recording time period, where X < N1;
    the electronic device extracts, using a second frame extraction frequency, Y frames of shooting pictures from N2 frames of shooting pictures collected in a second recording time period, where Y < N2, the second frame extraction frequency is different from the first frame extraction frequency, and the second recording time period does not overlap the first recording time period; and
    in response to a stop-recording operation performed by the user, the electronic device encodes the extracted M frames of shooting pictures into a time-lapse photography video, wherein the M frames of shooting pictures comprise the X frames of shooting pictures and the Y frames of shooting pictures.
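Purely as an illustration of claim 1 above (not part of the claims), the following Python sketch shows the arithmetic of keeping X of N1 frames in one segment and Y of N2 frames in another at different extraction intervals, then concatenating the M = X + Y kept frames for encoding. The 30 fps capture rate, the two intervals, and the integer frame indices are assumptions made for the example.

```python
# Minimal sketch of claim 1, assuming a 30 fps capture rate and integer frame indices.
def extract(frames, interval):
    """Keep one frame out of every `interval` captured frames."""
    return frames[::interval]

# Segment 1: slowly changing content, sparse extraction (assumed interval of 15).
segment1 = list(range(0, 900))        # N1 = 900 frames (~30 s at 30 fps)
x_frames = extract(segment1, 15)      # X = 60 frames kept

# Segment 2: quickly changing content, denser extraction (assumed interval of 5).
segment2 = list(range(900, 1800))     # N2 = 900 frames
y_frames = extract(segment2, 5)       # Y = 180 frames kept

# On the stop-recording operation, the M = X + Y kept frames are encoded as the video.
m_frames = x_frames + y_frames
print(len(x_frames), len(y_frames), len(m_frames))   # 60 180 240
```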
  2. The method according to claim 1, characterized in that, after the electronic device starts to record each frame of shooting picture captured by the camera, the method further comprises:
    if it is detected that a variation amplitude of shooting content in the shooting pictures in the first recording time period is less than a threshold, the electronic device determines that a frame extraction frequency in the first recording time period is the first frame extraction frequency;
    if it is detected that a variation amplitude of shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, the electronic device determines that a frame extraction frequency in the second recording time period is the second frame extraction frequency, wherein the second frame extraction frequency is greater than the first frame extraction frequency.
  3. The method according to claim 2, characterized in that, after the electronic device starts to record each frame of shooting picture captured by the camera, the method further comprises:
    the electronic device calculates an optical flow intensity between every two adjacent frames of collected shooting pictures, wherein the optical flow intensity reflects the variation amplitude of the shooting content in the two adjacent frames of shooting pictures;
    wherein, if it is detected that the variation amplitude of the shooting content in the shooting pictures in the first recording time period is less than the threshold, that the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency comprises:
    if it is detected that the optical flow intensities between adjacent frames of shooting pictures in the first recording time period are all less than a first preset value, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
    and wherein, if it is detected that the variation amplitude of the shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, that the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency comprises:
    if it is detected that the optical flow intensities between adjacent frames of shooting pictures in the second recording time period are all greater than or equal to the first preset value, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
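As an illustrative, non-claim sketch of claims 2 and 3, the snippet below estimates an optical flow intensity between adjacent frames with OpenCV's Farneback dense optical flow and picks the extraction interval by comparing it with a threshold. Using the mean flow magnitude as the "intensity", the first preset value, and the two intervals are all assumptions for the example, not values taken from the embodiments.

```python
# Assumed sketch: mean dense-optical-flow magnitude as the per-frame-pair intensity.
import cv2
import numpy as np

def flow_intensity(prev_bgr, cur_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(np.mean(magnitude))          # larger value = bigger change between the two frames

def pick_interval(intensity, first_preset=1.0, slow_interval=15, fast_interval=5):
    # Below the (assumed) first preset value: content changes little, extract sparsely.
    # At or above it: content changes a lot, extract densely (the higher second frequency).
    return slow_interval if intensity < first_preset else fast_interval
```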
  4. The method according to claim 2, characterized in that:
    if it is detected that the variation amplitude of the shooting content in the shooting pictures in the first recording time period is less than the threshold, that the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency comprises:
    if it is detected that the N1 frames of shooting pictures collected in the first recording time period belong to a preset first shooting scene, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
    and if it is detected that the variation amplitude of the shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, that the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency comprises:
    if it is detected that the N2 frames of shooting pictures collected in the second recording time period belong to a preset second shooting scene, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
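A minimal, hedged sketch of the scene-based variant in claim 4: the scene label is assumed to come from some image classifier (not specified here), and the scene names and the mapping from scene to extraction interval are invented for the example.

```python
# Hypothetical mapping from a detected shooting scene to an extraction interval.
SCENE_TO_INTERVAL = {
    "sky_cloud": 15,     # assumed first shooting scene: slow change, sparse extraction
    "street_crowd": 5,   # assumed second shooting scene: fast change, dense extraction
}

def interval_for_scene(scene_label, default=10):
    return SCENE_TO_INTERVAL.get(scene_label, default)
```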
  5. The method according to claim 2, characterized in that:
    if it is detected that the variation amplitude of the shooting content in the shooting pictures in the first recording time period is less than the threshold, that the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency comprises:
    if it is detected that a movement speed of a shooting target in the first recording time period is less than a second preset value, the electronic device determines that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
    and if it is detected that the variation amplitude of the shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, that the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency comprises:
    if it is detected that the movement speed of the shooting target in the second recording time period is greater than or equal to the second preset value, the electronic device determines that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
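An illustrative, non-claim sketch of claim 5: the shooting target's movement speed is approximated from the displacement of its bounding-box center between adjacent frames. The tracker producing the boxes, the second preset value, and the intervals are assumptions made for the example.

```python
# Assumed speed estimate: center displacement of the tracked target, in pixels per frame.
def target_speed(prev_box, cur_box):
    (px, py, pw, ph), (cx, cy, cw, ch) = prev_box, cur_box   # boxes as (x, y, width, height)
    prev_center = (px + pw / 2, py + ph / 2)
    cur_center = (cx + cw / 2, cy + ch / 2)
    return ((cur_center[0] - prev_center[0]) ** 2 +
            (cur_center[1] - prev_center[1]) ** 2) ** 0.5

def interval_for_speed(speed, second_preset=2.0, slow_interval=15, fast_interval=5):
    # Slow target: sparse extraction (first frequency); fast target: dense extraction (second frequency).
    return slow_interval if speed < second_preset else fast_interval
```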
  6. The method according to any one of claims 1 to 5, characterized in that, before the electronic device encodes the extracted M frames of shooting pictures into the time-lapse photography video, the method further comprises:
    the electronic device extracts, using a third frame extraction frequency, Z frames of shooting pictures from N3 frames of shooting pictures collected in a third recording time period, wherein the third frame extraction frequency is different from both the first frame extraction frequency and the second frame extraction frequency, the third recording time period overlaps neither the first recording time period nor the second recording time period, and the M frames of shooting pictures comprise the Z frames of shooting pictures.
  7. The method according to any one of claims 1 to 6, characterized in that, after the electronic device starts to record each frame of shooting picture captured by the camera, the method further comprises:
    the electronic device displays, in the preview screen in real time, each frame of shooting picture being recorded.
  8. The method according to claim 7, characterized in that, when the electronic device displays, in the preview screen in real time, each frame of shooting picture being recorded, the method further comprises:
    the electronic device displays, in the preview screen in real time, a current recording duration and a playback duration of the time-lapse photography video corresponding to the recording duration.
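To illustrate the relationship displayed in claim 8 (again, not part of the claims): if the kept frames are played back at a preset frame rate, the playback duration shown next to the recording duration follows from simple arithmetic. The capture rate, extraction interval, and playback rate below are assumptions for the example.

```python
# Worked example, assuming 30 fps capture, an extraction interval of 15, and 30 fps playback.
capture_fps = 30
extraction_interval = 15
playback_fps = 30

recording_seconds = 60                                                   # current recording duration in the preview
kept_frames = recording_seconds * capture_fps // extraction_interval    # 120 frames kept so far
playback_seconds = kept_frames / playback_fps                            # 4.0 s of time-lapse video
print(recording_seconds, playback_seconds)                               # 60 4.0
```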
  9. The method according to any one of claims 1 to 8, characterized in that the electronic device encoding the extracted M frames of shooting pictures into the time-lapse photography video comprises:
    the electronic device encodes, at a preset frame rate, the extracted M frames of shooting pictures into the time-lapse photography video of the current time-lapse photography.
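A minimal sketch of the encoding step in claim 9 using OpenCV's VideoWriter; the codec, output path, and 30 fps preset frame rate are assumptions for the example, not values fixed by the claims.

```python
# Encode the extracted M frames at a preset frame rate (assumed 30 fps) with OpenCV.
import cv2

def encode_timelapse(m_frames, out_path="timelapse.mp4", preset_fps=30):
    height, width = m_frames[0].shape[:2]            # frames assumed to be equal-size BGR images
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, preset_fps, (width, height))
    for frame in m_frames:
        writer.write(frame)
    writer.release()
    return out_path
```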
  10. The method according to any one of claims 1 to 9, characterized in that, after the electronic device encodes the extracted M frames of shooting pictures into the time-lapse photography video, the method further comprises:
    in response to an operation of the user opening the time-lapse photography video, the electronic device plays the time-lapse photography video at a preset frame rate.
  11. An electronic device, characterized by comprising:
    a touch screen, wherein the touch screen comprises a touch-sensitive surface and a display screen;
    one or more processors;
    one or more memories;
    one or more cameras; and
    one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprise instructions, and when the instructions are executed by the electronic device, the electronic device is caused to perform the following steps:
    displaying a preview interface of a time-lapse photography mode;
    in response to a recording operation performed by a user, starting to record each frame of shooting picture captured by the camera;
    extracting, using a first frame extraction frequency, X frames of shooting pictures from N1 frames of shooting pictures collected in a first recording time period, where X < N1;
    extracting, using a second frame extraction frequency, Y frames of shooting pictures from N2 frames of shooting pictures collected in a second recording time period, where Y < N2, the second frame extraction frequency is different from the first frame extraction frequency, and the second recording time period does not overlap the first recording time period; and
    in response to a stop-recording operation performed by the user, encoding the extracted M frames of shooting pictures into a time-lapse photography video, wherein the M frames of shooting pictures comprise the X frames of shooting pictures and the Y frames of shooting pictures.
  12. The electronic device according to claim 11, characterized in that, after the electronic device starts to record each frame of shooting picture captured by the camera, the electronic device is further configured to perform:
    if it is detected that a variation amplitude of shooting content in the shooting pictures in the first recording time period is less than a threshold, determining that a frame extraction frequency in the first recording time period is the first frame extraction frequency;
    if it is detected that a variation amplitude of shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, determining that a frame extraction frequency in the second recording time period is the second frame extraction frequency, wherein the second frame extraction frequency is greater than the first frame extraction frequency.
  13. The electronic device according to claim 12, characterized in that, after the electronic device starts to record each frame of shooting picture captured by the camera, the electronic device is further configured to perform:
    calculating an optical flow intensity between every two adjacent frames of collected shooting pictures, wherein the optical flow intensity reflects the variation amplitude of the shooting content in the two adjacent frames of shooting pictures;
    wherein, if it is detected that the variation amplitude of the shooting content in the shooting pictures in the first recording time period is less than the threshold, the electronic device determining that the frame extraction frequency in the first recording time period is the first frame extraction frequency specifically comprises:
    if it is detected that the optical flow intensities between adjacent frames of shooting pictures in the first recording time period are all less than a first preset value, determining that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
    and wherein, if it is detected that the variation amplitude of the shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, the electronic device determining that the frame extraction frequency in the second recording time period is the second frame extraction frequency specifically comprises:
    if it is detected that the optical flow intensities between adjacent frames of shooting pictures in the second recording time period are all greater than or equal to the first preset value, determining that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
  14. The electronic device according to claim 12, characterized in that:
    if it is detected that the variation amplitude of the shooting content in the shooting pictures in the first recording time period is less than the threshold, the electronic device determining that the frame extraction frequency in the first recording time period is the first frame extraction frequency specifically comprises:
    if it is detected that the N1 frames of shooting pictures collected in the first recording time period belong to a preset first shooting scene, determining that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
    and if it is detected that the variation amplitude of the shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, the electronic device determining that the frame extraction frequency in the second recording time period is the second frame extraction frequency specifically comprises:
    if it is detected that the N2 frames of shooting pictures collected in the second recording time period belong to a preset second shooting scene, determining that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
  15. The electronic device according to claim 12, characterized in that:
    if it is detected that the variation amplitude of the shooting content in the shooting pictures in the first recording time period is less than the threshold, the electronic device determining that the frame extraction frequency in the first recording time period is the first frame extraction frequency specifically comprises:
    if it is detected that a movement speed of a shooting target in the first recording time period is less than a second preset value, determining that the frame extraction frequency in the first recording time period is the first frame extraction frequency;
    and if it is detected that the variation amplitude of the shooting content in the shooting pictures in the second recording time period is greater than or equal to the threshold, the electronic device determining that the frame extraction frequency in the second recording time period is the second frame extraction frequency specifically comprises:
    if it is detected that the movement speed of the shooting target in the second recording time period is greater than or equal to the second preset value, determining that the frame extraction frequency in the second recording time period is the second frame extraction frequency.
  16. The electronic device according to any one of claims 11 to 15, characterized in that, before the electronic device encodes the extracted M frames of shooting pictures into the time-lapse photography video, the electronic device is further configured to perform:
    extracting, using a third frame extraction frequency, Z frames of shooting pictures from N3 frames of shooting pictures collected in a third recording time period, wherein the third frame extraction frequency is different from both the first frame extraction frequency and the second frame extraction frequency, the third recording time period overlaps neither the first recording time period nor the second recording time period, and the M frames of shooting pictures comprise the Z frames of shooting pictures.
  17. The electronic device according to any one of claims 11 to 16, characterized in that, after the electronic device starts to record each frame of shooting picture captured by the camera, the electronic device is further configured to perform:
    displaying, in the preview screen in real time, each frame of shooting picture being recorded.
  18. The electronic device according to claim 17, characterized in that, when the electronic device displays, in the preview screen in real time, each frame of shooting picture being recorded, the electronic device is further configured to perform:
    displaying, in the preview screen in real time, a current recording duration and a playback duration of the time-lapse photography video corresponding to the recording duration.
  19. The electronic device according to any one of claims 11 to 18, characterized in that the electronic device encoding the extracted M frames of shooting pictures into the time-lapse photography video specifically comprises:
    encoding, at a preset frame rate, the extracted M frames of shooting pictures into the time-lapse photography video of the current time-lapse photography.
  20. The electronic device according to any one of claims 11 to 19, characterized in that, after the electronic device encodes the extracted M frames of shooting pictures into the time-lapse photography video, the electronic device is further configured to perform:
    in response to an operation of the user opening the time-lapse photography video, playing the time-lapse photography video at a preset frame rate.
  21. A computer-readable storage medium storing instructions, characterized in that, when the instructions are run on an electronic device, the electronic device is caused to perform the recording method for time-lapse photography according to any one of claims 1 to 10.
  22. A computer program product comprising instructions, characterized in that, when the computer program product runs on an electronic device, the electronic device is caused to perform the recording method for time-lapse photography according to any one of claims 1 to 10.
PCT/CN2020/079402 2019-03-25 2020-03-14 Recording method for time-lapse photography, and electronic device WO2020192461A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910229645.5A CN110086985B (en) 2019-03-25 2019-03-25 Recording method for delayed photography and electronic equipment
CN201910229645.5 2019-03-25

Publications (1)

Publication Number Publication Date
WO2020192461A1 true WO2020192461A1 (en) 2020-10-01

Family

ID=67413619

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079402 WO2020192461A1 (en) 2019-03-25 2020-03-14 Recording method for time-lapse photography, and electronic device

Country Status (2)

Country Link
CN (1) CN110086985B (en)
WO (1) WO2020192461A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643728A (en) * 2021-08-12 2021-11-12 荣耀终端有限公司 Audio recording method, electronic device, medium, and program product
CN114390236A (en) * 2021-12-17 2022-04-22 云南腾云信息产业有限公司 Video processing method, video processing device, computer equipment and storage medium
CN114615421A (en) * 2020-12-07 2022-06-10 华为技术有限公司 Image processing method and electronic device

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110086985B (en) * 2019-03-25 2021-03-30 华为技术有限公司 Recording method for delayed photography and electronic equipment
CN112532859B (en) * 2019-09-18 2022-05-31 华为技术有限公司 Video acquisition method and electronic equipment
CN112532857B (en) * 2019-09-18 2022-04-12 华为技术有限公司 Shooting method and equipment for delayed photography
CN112887583B (en) * 2019-11-30 2022-07-22 华为技术有限公司 Shooting method and electronic equipment
CN111294509A (en) * 2020-01-22 2020-06-16 Oppo广东移动通信有限公司 Video shooting method, device, terminal and storage medium
CN113225490B (en) * 2020-02-04 2024-03-26 Oppo广东移动通信有限公司 Time-delay photographing method and photographing device thereof
CN111240184B (en) * 2020-02-21 2021-12-31 华为技术有限公司 Method for determining clock error, terminal and computer storage medium
CN111526281B (en) * 2020-03-25 2021-06-25 东莞市至品创造数码科技有限公司 Method and device for calculating time length of delayed photographic image
CN111464760A (en) * 2020-05-06 2020-07-28 Oppo(重庆)智能科技有限公司 Dynamic image generation method and device and terminal equipment
CN113747049B (en) * 2020-05-30 2023-01-13 华为技术有限公司 Shooting method and equipment for delayed photography
WO2022021128A1 (en) * 2020-07-29 2022-02-03 深圳市大疆创新科技有限公司 Image processing method, electronic device, camera and readable storage medium
CN112637528B (en) * 2020-12-21 2023-12-29 维沃移动通信有限公司 Picture processing method and device
CN112702607B (en) * 2020-12-25 2022-11-22 深圳大学 Intelligent video compression method and device based on optical flow decision
CN113726949B (en) * 2021-05-31 2022-08-26 荣耀终端有限公司 Video processing method, electronic device and storage medium
CN113810596B (en) * 2021-07-27 2023-01-31 荣耀终端有限公司 Time-delay shooting method and device
CN113691721B (en) * 2021-07-28 2023-07-18 浙江大华技术股份有限公司 Method, device, computer equipment and medium for synthesizing time-lapse photographic video
CN115776532B (en) * 2021-09-07 2023-10-20 荣耀终端有限公司 Method for capturing images in video and electronic equipment
CN113556473B (en) * 2021-09-23 2022-02-08 深圳市天和荣科技有限公司 Shooting method and device for flower blooming process, electronic equipment and storage medium
CN114679607B (en) * 2022-03-22 2024-03-05 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN114827477B (en) * 2022-05-26 2024-03-29 维沃移动通信有限公司 Method, device, electronic equipment and medium for time-lapse photography
CN116708751B (en) * 2022-09-30 2024-02-27 荣耀终端有限公司 Method and device for determining photographing duration and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959539A (en) * 2016-05-09 2016-09-21 南京云恩通讯科技有限公司 Time-lapse photography method for automatically determining delay rate
US20160344927A1 (en) * 2015-05-21 2016-11-24 Apple Inc. Time Lapse User Interface Enhancements
CN107396019A (en) * 2017-08-11 2017-11-24 维沃移动通信有限公司 A kind of slow motion video method for recording and mobile terminal
CN108293123A (en) * 2015-12-22 2018-07-17 三星电子株式会社 The method and apparatus of image when for generating contracting
CN109068052A (en) * 2018-07-24 2018-12-21 努比亚技术有限公司 video capture method, mobile terminal and computer readable storage medium
CN110086985A (en) * 2019-03-25 2019-08-02 华为技术有限公司 A kind of method for recording and electronic equipment of time-lapse photography

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9247098B2 (en) * 2013-04-09 2016-01-26 Here Global B.V. Automatic time lapse capture
CN104539864B (en) * 2014-12-23 2018-02-02 小米科技有限责任公司 The method and apparatus for recording image
US10187607B1 (en) * 2017-04-04 2019-01-22 Gopro, Inc. Systems and methods for using a variable capture frame rate for video capture
CN107197162B (en) * 2017-07-07 2020-11-13 盯盯拍(深圳)技术股份有限公司 Shooting method, shooting device, video storage equipment and shooting terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160344927A1 (en) * 2015-05-21 2016-11-24 Apple Inc. Time Lapse User Interface Enhancements
CN108293123A (en) * 2015-12-22 2018-07-17 三星电子株式会社 The method and apparatus of image when for generating contracting
CN105959539A (en) * 2016-05-09 2016-09-21 南京云恩通讯科技有限公司 Time-lapse photography method for automatically determining delay rate
CN107396019A (en) * 2017-08-11 2017-11-24 维沃移动通信有限公司 A kind of slow motion video method for recording and mobile terminal
CN109068052A (en) * 2018-07-24 2018-12-21 努比亚技术有限公司 video capture method, mobile terminal and computer readable storage medium
CN110086985A (en) * 2019-03-25 2019-08-02 华为技术有限公司 A kind of method for recording and electronic equipment of time-lapse photography

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615421A (en) * 2020-12-07 2022-06-10 华为技术有限公司 Image processing method and electronic device
CN114615421B (en) * 2020-12-07 2023-06-30 华为技术有限公司 Image processing method and electronic equipment
CN113643728A (en) * 2021-08-12 2021-11-12 荣耀终端有限公司 Audio recording method, electronic device, medium, and program product
CN113643728B (en) * 2021-08-12 2023-08-22 荣耀终端有限公司 Audio recording method, electronic equipment, medium and program product
CN114390236A (en) * 2021-12-17 2022-04-22 云南腾云信息产业有限公司 Video processing method, video processing device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110086985A (en) 2019-08-02
CN110086985B (en) 2021-03-30

Similar Documents

Publication Publication Date Title
WO2020192461A1 (en) Recording method for time-lapse photography, and electronic device
WO2020186969A1 (en) Multi-path video recording method and device
WO2021052232A1 (en) Time-lapse photography method and device
WO2021052292A1 (en) Video acquisition method and electronic device
US11889180B2 (en) Photographing method and electronic device
WO2020078237A1 (en) Audio processing method and electronic device
WO2020238741A1 (en) Image processing method, related device and computer storage medium
CN110381276B (en) Video shooting method and electronic equipment
US20220245778A1 (en) Image bloom processing method and apparatus, and storage medium
CN113810596B (en) Time-delay shooting method and device
JP2022512125A (en) Methods and Electronic Devices for Taking Long Exposure Images
WO2021032117A1 (en) Photographing method and electronic device
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN115086567A (en) Time-delay shooting method and device
CN113593567B (en) Method for converting video and sound into text and related equipment
CN113726949B (en) Video processing method, electronic device and storage medium
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN115412678B (en) Exposure processing method and device and electronic equipment
WO2023077939A1 (en) Camera switching method and apparatus, and electronic device and storage medium
WO2022033344A1 (en) Video stabilization method, and terminal device and computer-readable storage medium
WO2022095752A1 (en) Frame demultiplexing method, electronic device and storage medium
CN115297269B (en) Exposure parameter determination method and electronic equipment
WO2023284591A1 (en) Video capture method and apparatus, electronic device, and storage medium
RU2780808C1 (en) Method for photographing and electronic apparatus
RU2789447C1 (en) Method and apparatus for multichannel video recording

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20777822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20777822

Country of ref document: EP

Kind code of ref document: A1