WO2020192461A1 - Recording method for time-lapse photography, and electronic device - Google Patents

Recording method for time-lapse photography, and electronic device

Info

Publication number
WO2020192461A1
WO2020192461A1 (PCT/CN2020/079402)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
electronic device
shooting
time period
recording
Prior art date
Application number
PCT/CN2020/079402
Other languages
English (en)
Chinese (zh)
Inventor
陈绍君
杨丽霞
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020192461A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/147 Scene change detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Definitions

  • This application relates to the field of terminal technology, and in particular to a recording method and electronic equipment for time-lapse photography.
  • Time-lapse photography, which may also be called time-lapse video recording, is a shooting technique that compresses time and reproduces the process of a slowly changing scene within a short period. The resulting video is typically played back at a normal frame rate (for example, 24 frames per second).
  • Generally, the frame extraction frequency used by the time-lapse photography function is preset before the mobile phone leaves the factory, or it can be set manually by the user. Once the frame extraction frequency is determined, the mobile phone extracts frames at that fixed frequency throughout the time-lapse recording and assembles them into the time-lapse video.
  • However, extracting frames at a fixed frequency makes fast-changing highlights, such as the moment a flower blooms or the sun rises, yield few frames and pass quickly, while slowly changing periods yield many frames and drag on. The resulting video cannot highlight the most interesting parts of the time-lapse recording, so the user experience is poor.
  • This application provides a recording method and electronic device for time-lapse photography that can record a time-lapse video with a dynamically changing frame extraction frequency, so as to highlight the most interesting parts of the recording and improve the user experience.
  • In a first aspect, this application provides a recording method for time-lapse photography, including: an electronic device displays a preview interface of the time-lapse photography mode; in response to a recording operation performed by the user (for example, clicking the record button in the preview interface), the electronic device starts recording each frame captured by the camera; during recording, the electronic device uses a first frame extraction frequency to extract X (X < N1) frames from the N1 frames captured during a first recording time period, and uses a second frame extraction frequency (different from the first) to extract Y (Y < N2) frames from the N2 frames captured during a second recording time period; in response to the user's stop-recording operation (for example, clicking the record button again in the preview interface), the electronic device encodes the extracted M frames (including the above X frames and Y frames) into the time-lapse photography video. A rough illustrative sketch of this flow follows.
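  • The following Python sketch illustrates the claimed flow under stated assumptions: the helper names, the capture rate, and the concrete extraction frequencies are illustrative, not part of the patent text.

```python
def extract_at_frequency(frames, capture_fps, extract_hz):
    """Keep one frame out of every (capture_fps / extract_hz) captured frames."""
    step = max(1, round(capture_fps / extract_hz))
    return frames[::step]

def assemble_time_lapse(period1_frames, period2_frames, capture_fps=30):
    # First recording time period: slow changes -> low extraction frequency.
    x_frames = extract_at_frequency(period1_frames, capture_fps, extract_hz=0.5)
    # Second recording time period: fast changes -> higher extraction frequency.
    y_frames = extract_at_frequency(period2_frames, capture_fps, extract_hz=2.0)
    # The M = X + Y extracted frames are later encoded into the time-lapse video.
    return x_frames + y_frames
```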
  • That is, during recording the electronic device can dynamically apply different frame extraction frequencies to the pictures being recorded to form the time-lapse video. In this way, the resulting video dynamically presents how the shooting content changes across different recording time periods, improving both the photographic effect of the time-lapse recording and the user experience.
  • In a possible design, after the electronic device starts recording each frame captured by the camera, the method further includes: if it is detected that the variation of the shooting content during the first recording time period is less than a threshold, the electronic device determines the frame extraction frequency for the first recording time period to be the first frame extraction frequency; if it is detected that the variation of the shooting content during the second recording time period is greater than or equal to the threshold, the electronic device determines the frame extraction frequency for the second recording time period to be the second frame extraction frequency, where the second frequency is greater than the first.
  • In other words, during recording the electronic device selects the frame extraction frequency according to how much the shooting content changes. When the change is small, the shooting content is moving slowly, and a lower frame extraction frequency suffices. When the change is large, the shooting content is moving quickly, and the electronic device uses a higher frequency to extract more frames, which capture the details of the change more accurately. The recorded time-lapse video can thus dwell on the fast-changing highlights, improving the photographic effect and the user experience.
  • In a possible design, after the electronic device starts recording, it calculates the optical flow intensity between each pair of adjacent captured frames, which reflects how much the shooting content changes between them. If the optical flow intensity between adjacent frames during the first recording time period is less than a first preset value, the scene changes little, and the electronic device sets the frame extraction frequency for that period to the smaller first frequency; if the optical flow intensity between adjacent frames during the second recording time period is greater than or equal to the first preset value, the scene changes noticeably, and the electronic device sets the frame extraction frequency for that period to the larger second frequency.
  • In a possible design, if it is detected that the N1 frames captured during the first recording time period belong to a preset first shooting scene, the electronic device may determine the frame extraction frequency for the first recording time period to be the first frame extraction frequency; if it is detected that the N2 frames captured during the second recording time period belong to a preset second shooting scene, the electronic device may determine the frame extraction frequency for the second recording time period to be the second frame extraction frequency.
  • Here, scenery in the first shooting scene moves slowly, while scenery in the second shooting scene moves faster. In this way, the time-lapse video finally produced by the electronic device can accurately present the dynamic process of scene changes in different shooting scenes, improving the shooting effect and the user experience.
  • In a possible design, if it is detected that the moving speed of the shooting target during the first recording time period is less than a second preset value, the electronic device may determine the frame extraction frequency for the first recording time period to be the first frame extraction frequency; if the moving speed of the shooting target during the second recording time period is greater than or equal to the second preset value, the electronic device may determine the frame extraction frequency for the second recording time period to be the second frame extraction frequency, so that the rapid movement of the shooting target is prominently presented in the time-lapse video.
  • In a possible design, before encoding the extracted M frames into the time-lapse video, the electronic device may additionally use a third frame extraction frequency to extract Z frames (also part of the M extracted frames) from the N3 frames captured during a third recording time period; the third frequency differs from both the first and second frequencies, and the third recording time period does not overlap with the first or second recording time periods.
  • In a possible design, the method further includes: the electronic device displays, in real time, each frame being recorded in the preview interface.
  • In a possible design, while displaying each recorded frame in the preview interface in real time, the electronic device also displays the current recording duration and the playback duration of the time-lapse video corresponding to that recording duration, so the user knows the video length of the time-lapse video in real time.
  • In a possible design, the electronic device encodes the extracted M frames into the time-lapse video according to a preset frame rate. After encoding, in response to an operation of the user to open the time-lapse video, the electronic device plays the time-lapse video at the preset frame rate.
  • In a second aspect, this application provides an electronic device, including: a touch screen, one or more processors, one or more memories, one or more cameras, and one or more computer programs; the processor is coupled with the touch screen, the memory, and the camera; the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs any of the above recording methods for time-lapse photography.
  • In a third aspect, the present application provides a computer storage medium, including computer instructions that, when run on an electronic device, cause the electronic device to execute the time-lapse photography recording method described in any design of the first aspect.
  • In a fourth aspect, the present application provides a computer program product that, when run on an electronic device, causes the electronic device to execute the recording method for time-lapse photography described in any design of the first aspect.
  • The electronic device of the second aspect, the computer storage medium of the third aspect, and the computer program product of the fourth aspect are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is a first schematic structural diagram of an electronic device according to an embodiment of this application;
  • FIG. 2 is a schematic diagram of a photographing principle according to an embodiment of this application;
  • FIG. 3 is a schematic flowchart of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 4 is a first scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 5 is a second scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 6 is a third scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 7 is a fourth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 8 is a fifth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 9 is a sixth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 10 is a seventh scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 11 is an eighth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 12 is a ninth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 13 is a tenth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 14 is an eleventh scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 15 is a twelfth scene schematic diagram of a recording method for time-lapse photography according to an embodiment of this application;
  • FIG. 16 is a second schematic structural diagram of an electronic device according to an embodiment of this application;
  • FIG. 17 is a third schematic structural diagram of an electronic device according to an embodiment of this application.
  • The recording method for time-lapse photography provided in the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDAs), wearable electronic devices, and virtual reality devices, which is not limited in the embodiments of the present application.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • In some embodiments, the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor through an I2C interface, so that the processor 110 and the touch sensor communicate through the I2C bus interface to realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely illustrative and does not constitute a structural limitation of the electronic device 100.
  • In other embodiments of the present application, the electronic device 100 may also adopt an interface connection manner different from those in the foregoing embodiment, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 can receive input from the battery 142 and/or the charging management module 140, and supply power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can be used to monitor performance parameters such as battery capacity, battery cycle times, battery charging voltage, battery discharging voltage, and battery health status (such as leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include one or more filters, switches, power amplifiers, low noise amplifiers (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating one or more communication processing modules.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • The ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter opens, light is transmitted through the lens to the photosensitive element of the camera, the light signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the mobile phone 100 may include 1 or N cameras, and N is a positive integer greater than 1.
  • the camera 193 may be a front camera or a rear camera. As shown in FIG. 2, the camera 193 generally includes a lens and a sensor.
  • The photosensitive element may be any photosensitive device such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the reflected light of the object being photographed can generate an optical image after passing through the lens.
  • the optical image is projected onto the photosensitive element, and the photosensitive element converts the received light signal into an electrical signal, and further,
  • Then, the camera 193 sends the obtained electrical signal to a digital signal processing (DSP) module for digital signal processing, finally obtaining one frame of digital image.
  • the DSP can obtain a continuous multi-frame digital image according to the above-mentioned shooting principle, and the continuous multi-frame digital image can be encoded according to a certain frame rate to form a video.
  • In addition, one or more frames of digital images output by the DSP can be displayed on the mobile phone 100 through the display screen 194, or stored in the internal memory 121 (or an external memory connected via the external memory interface 120), which is not limited in the embodiments of this application.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
  • The processor 110 can execute the above-mentioned instructions stored in the internal memory 121 to enable the electronic device 100 to execute the recording method for time-lapse photography provided in some embodiments of the present application, as well as various functional applications and data processing.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more application programs (such as a gallery, contacts, etc.) and so on.
  • The storage data area can store data (such as photos and contacts) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, universal flash storage (UFS), etc.
  • The processor 110 executes the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor, so that the electronic device 100 executes the recording method for time-lapse photography provided in the embodiments of the present application, as well as various functional applications and data processing.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device 100 answers a call or plays a voice message, the receiver 170B can be brought close to the human ear to hear the voice.
  • The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • The user can speak with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with one or more microphones 170C.
  • the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals.
  • the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
  • the electronic device 100 provided in the embodiment of the present application may further include one or more devices such as the button 190, the motor 191, the indicator 192, and the SIM card interface 195, which is not limited in the embodiment of the present application.
  • Time-lapse photography, also called time-lapse video recording, is a shooting technique that can compress time and reproduce, within a short period, the process of a scene changing slowly.
  • the electronic device 100 can start to record each frame of the shooting picture captured by the camera 193.
  • Subsequently, the electronic device 100 can extract, at a certain frame extraction frequency, M (M < N) frames from the N (N > 1) frames captured by the camera 193 to serve as the time-lapse video of this recording.
  • During playback, the electronic device 100 can play the extracted M frames at a certain frame rate, so that the scene changes in the N frames actually captured by the electronic device 100 are reproduced through the M extracted frames.
  • Optical flow is the instantaneous velocity of the pixel movement of a spatial moving object on an imaging plane (for example, a photographed picture).
  • When the time interval is very small (for example, between two consecutive frames of a video), the optical flow is equivalent to the displacement of the target point.
  • When the human eye observes a moving object, the object forms a series of continuously changing images on the retina. This continuously changing information constantly "flows through" the retina (i.e., the imaging plane), like a flow of light, hence the term optical flow.
  • Optical flow expresses the intensity of image changes and contains the motion information of objects between adjacent frames.
  • For example, if the position of point A in the t-th frame is (x1, y1), and the position of point A in the (t+1)-th frame is (x2, y2), then the displacement of point A between these two frames is (x2 − x1, y2 − y1), and the magnitude of this displacement (for example, sqrt((x2 − x1)² + (y2 − y1)²)) can serve as the optical flow intensity of point A.
  • According to the above method, the optical flow intensity of each pixel between two adjacent frames can be calculated. Then, based on the per-pixel optical flow intensities, the electronic device 100 can use a preset optical flow algorithm to determine the overall optical flow intensity between the t-th frame and the (t+1)-th frame. A larger optical flow intensity indicates a larger change of the shooting content between the two frames; a smaller optical flow intensity indicates a smaller change. An illustrative sketch of one such computation follows.
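  • One way to realize this measurement is sketched below, assuming OpenCV's dense Farneback optical flow is available; the patent itself only refers to "a preset optical flow algorithm", so both the algorithm choice and its parameters are illustrative.

```python
import cv2
import numpy as np

def optical_flow_intensity(frame_t, frame_t1):
    """Mean per-pixel displacement magnitude between two grayscale frames."""
    # Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(frame_t, frame_t1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # flow[..., 0] and flow[..., 1] hold the per-pixel (dx, dy) displacements;
    # their Euclidean norm is sqrt((x2 - x1)^2 + (y2 - y1)^2) per pixel.
    magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return float(magnitude.mean())
```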
  • In the embodiments of this application, the electronic device 100 can use a dynamic frame extraction frequency to extract M (M < N) frames from the N frames captured by the camera 193 to form the time-lapse video. For example, if the optical flow intensity between adjacent frames during the first 10 seconds is less than a preset value, the pictures captured in that period change little, and the electronic device 100 can use a smaller first frame extraction frequency (for example, one frame every 3 seconds) to extract from the pictures taken in those 10 seconds. If the optical flow intensity between adjacent frames during seconds 11-20 is greater than the preset value, the pictures captured in that period change greatly, and the electronic device 100 can use a larger second frame extraction frequency (for example, one frame every 0.5 seconds) to extract from the pictures taken in seconds 11-20. Finally, the M frames extracted by the electronic device 100 form the video of this time-lapse recording; a sketch of this decision rule follows.
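  • A minimal sketch of that decision rule, treating the preset optical-flow threshold and the two extraction intervals from the example above as illustrative constants:

```python
PRESET_Q = 5.0         # illustrative optical-flow threshold
SLOW_INTERVAL_S = 3.0  # one frame every 3 s when the picture changes little
FAST_INTERVAL_S = 0.5  # one frame every 0.5 s when the picture changes a lot

def extraction_interval(q_intensity):
    """Choose the frame-extraction interval from the optical flow intensity Q."""
    return SLOW_INTERVAL_S if q_intensity < PRESET_Q else FAST_INTERVAL_S
```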
  • It can be seen that, during recording, the electronic device can select the frame extraction frequency according to the changes of the shooting content in the captured pictures. When the change is small, the shooting content is moving slowly, and a lower frame extraction frequency suffices. When the change is large, the shooting content is moving quickly, and the electronic device can use a higher frequency to extract more frames, which reflect the details of the change more accurately. The recorded time-lapse video can thus dwell on the fast-changing highlights, improving the photographic effect and the user experience.
  • The following takes a mobile phone as an example of the above electronic device 100 to describe in detail the recording method for time-lapse photography provided in an embodiment of the present application. As shown in FIG. 3, the method includes steps S301-S306.
  • S301. In response to the recording operation performed by the user in the time-lapse photography mode, the mobile phone starts to collect each frame captured by the camera.
  • the camera application of the mobile phone has the function option of time-lapse photography.
  • the preview interface 401 displayed by the mobile phone is set with a function option 402 of time-lapse photography. If it is detected that the user selects the function option 402, the mobile phone can enter the time-lapse photography mode of the camera application.
  • one or more shooting modes such as photo mode, portrait mode, panorama mode, video mode, or slow motion mode can also be set in the preview interface 401, and the embodiment of the present application does not impose any limitation on this.
  • the preview interface 401 can display the shooting image 403 currently captured by the camera. Since the time-lapse photography video has not yet been recorded at this time, the shooting screen 403 displayed in real time in the preview interface 401 may be referred to as a preview screen.
  • The preview interface 401 also includes a recording button 404 for time-lapse photography. If it is detected that the user clicks the record button 404 in the preview interface 401, it means that the user has performed the recording operation in the time-lapse photography mode. At this point, the mobile phone can continue to collect each frame captured by the camera and start recording the time-lapse video.
  • During recording, the mobile phone can collect frames at a certain collection frequency. Taking a collection frequency of 30 frames per second as an example, after the user clicks the record button 404 in the preview interface 401, the phone captures 30 frames within seconds 0-1, another 30 frames within seconds 1-2, and so on. As the recording time elapses, the collected frames gradually accumulate, and the frames that the mobile phone extracts from them according to the frame extraction frequency finally form the time-lapse video.
  • In some embodiments, the mobile phone can duplicate the video stream formed by the captured frames into two video streams. Each video stream includes every frame captured in real time after the time-lapse recording function is turned on. The mobile phone can use one of the video streams to execute the following step S302 and complete the preview display task during time-lapse photography, and use the other video stream to perform the following steps S303-S305 and complete the production of the time-lapse video. A sketch of one possible split follows.
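  • The patent does not name a mechanism for the two-stream split; one plausible sketch pushes each captured frame into two queues, one consumed by the preview display (step S302) and one by the time-lapse production pipeline (steps S303-S305):

```python
import queue

preview_stream = queue.Queue()     # consumed by the preview display task
production_stream = queue.Queue()  # consumed by frame extraction and encoding

def on_frame_captured(frame):
    """Duplicate each captured frame into both video streams."""
    preview_stream.put(frame)
    production_stream.put(frame)
```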
  • S302. The mobile phone displays, in real time, each collected frame in the preview interface.
  • In step S302, after the mobile phone starts to record the time-lapse video, as shown in FIG. 5, the mobile phone can display in the preview interface 401, in real time, each frame of the shooting picture 501 currently collected by the camera.
  • the mobile phone may display a shooting prompt 502 in the preview interface 401 to remind the user that the user is currently in the recording state.
  • the mobile phone can also load corresponding animation special effects on the recording button 404 to remind the user that the user is currently in the recording state.
  • the mobile phone may also display the currently recorded duration 503 in the preview interface 401.
  • the recorded time 503 may reflect the current recording time.
  • In addition, the mobile phone can display, in the preview interface 401, the playback duration 504 of the time-lapse video corresponding to the currently recorded duration 503. For example, if the mobile phone plays the final time-lapse video at a frame rate of 30 frames per second, then 1 second of time-lapse video requires 30 extracted frames. If the mobile phone extracts frames at one frame per second, it extracts 30 frames every 30 seconds of recording, so every 30 seconds of recorded time corresponds to a time-lapse video duration 504 of 1 second. In this way, time-lapse photography compresses a video recorded over a long period into a much shorter video, reproducing the process of a slowly changing scene within a short time; this arithmetic is sketched below.
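  • The relationship between recorded time and playback time in this example reduces to simple arithmetic; a sketch with the constants used above:

```python
def playback_seconds(recorded_seconds, extract_fps=1.0, playback_fps=30.0):
    """Playback duration of the time-lapse video for a given recording time."""
    extracted_frames = recorded_seconds * extract_fps  # frames kept so far
    return extracted_frames / playback_fps             # seconds of final video

# 30 s of recording at 1 extracted frame/s -> 30 frames -> 1 s of video at 30 fps.
assert playback_seconds(30) == 1.0
```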
  • S303. The mobile phone determines the optical flow intensity between adjacent frames of the N collected frames.
  • In step S303, as shown in FIG. 6, after the mobile phone activates the recording function of time-lapse photography, N frames can be collected at a certain collection frequency, where N dynamically increases with the recording time.
  • After the mobile phone collects the second frame, a preset optical flow algorithm can be used to calculate the optical flow intensity Q(1) between the first frame and the second frame.
  • The greater the optical flow intensity Q(1), the greater the degree of change between the first frame and the second frame, that is, the faster the scene in those frames is moving.
  • Similarly, after the mobile phone obtains the third frame, it can calculate the optical flow intensity Q(2) between the second frame and the third frame; after obtaining the fourth frame, it can calculate Q(3) between the third frame and the fourth frame; ...; after obtaining the N-th frame, it can calculate the optical flow intensity Q(N−1) between the (N−1)-th frame and the N-th frame.
  • That is, each optical flow intensity Q calculated by the mobile phone reflects the degree of change between two adjacent frames. Furthermore, as shown in FIG. 7, from the successive values of Q between adjacent frames, the mobile phone can determine how the optical flow intensity changes with recording time during this recording, that is, the curve of optical flow intensity Q versus recording time (referred to as the optical flow curve in subsequent embodiments). The optical flow curve reflects the changes of the captured pictures during recording. It should be noted that, since the length of the recording is generally controlled manually by the user, the optical flow curve grows dynamically as recording continues. A sketch of building this curve incrementally follows.
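  • Building the running Q curve of step S303 can be sketched as an incremental computation over adjacent frames, reusing the optical_flow_intensity helper assumed in the earlier sketch:

```python
def optical_flow_curve(frames):
    """Return [Q(1), ..., Q(N-1)]: one intensity per pair of adjacent frames."""
    return [optical_flow_intensity(prev, nxt)
            for prev, nxt in zip(frames, frames[1:])]
```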
  • Generally, the shooting content in pictures taken before and after sunrise changes slowly, so the optical flow intensity Q corresponding to the time periods before and after sunrise is relatively small. If the mobile phone extracted frames at a high frequency during these two periods, the resulting time-lapse video would contain many pictures before and after sunrise with no significant change between them. Therefore, the mobile phone can dynamically set the frame extraction frequency of each recording period according to the above optical flow curve by executing the following step S304, so as to improve the photographic effect of time-lapse photography.
  • In step S304, the mobile phone can detect in real time the magnitude of the optical flow intensity Q in the optical flow curve formed in step S303. Still taking the optical flow curve shown in FIG. 7 as an example, if it is detected that the optical flow intensity Q corresponding to the curve from the 1st to the 10th second is less than a first threshold, the captured scene has not changed drastically during that interval. The mobile phone can therefore set the frame extraction frequency for the 1st to 10th seconds to the lower first frame extraction frequency, for example, one frame every 2 seconds. Then, as shown in FIG. 8, the mobile phone can extract 5 frames at this first frequency from the N1 frames collected from the 1st to the 10th second.
  • Correspondingly, if it is detected that the optical flow intensity Q corresponding to the curve from the 10th to the 15th second is greater than or equal to the first threshold, the mobile phone can set the frame extraction frequency for the 10th to 15th seconds to the second frame extraction frequency with a higher value, for example, one frame every second. The mobile phone can then extract 5 frames at this second frequency from the N2 frames collected from the 10th to the 15th second.
  • Subsequently, the mobile phone can continue to dynamically adjust the frame extraction frequency of different recording time periods according to the optical flow curve after the 15th second, which is not limited in the embodiments of this application.
  • It can be seen that when the change of the scene in the captured pictures is not obvious, the mobile phone can use the lower first frame extraction frequency to extract frames from the collected pictures; when the change of the scene is more obvious, the mobile phone can use the higher second frequency. In this way, when the captured scene changes significantly, the mobile phone extracts a larger number of frames, and these frames reflect the details of the change more accurately, so that the final recorded time-lapse video highlights the dynamic process of significant scene changes.
  • In some embodiments, the mobile phone can also dynamically adjust the frame extraction frequency in units of a preset time window; a sketch of this windowed variant is given below. As shown in (a) of FIG. 10, the mobile phone can preset a unit time window 1001 with a length of 5 seconds. Starting from the beginning of the optical flow curve, the mobile phone computes the average optical flow intensity Q within the time window 1001, that is, the average of Q from the 1st to the 5th second. If the computed average is greater than a threshold, the mobile phone extracts frames from the pictures collected within the time window 1001 at the higher second frame extraction frequency; if the average is less than the threshold, it extracts them at the lower first frequency. Then, as shown in (b) of FIG. 10, the mobile phone moves the time window 1001 to the next 5 seconds of the optical flow curve (the 5th to the 10th second), and repeats the calculation of the average Q for each new window position. In this way, the mobile phone dynamically adjusts the frame extraction frequency in 5-second units while recording the time-lapse video.
  • the size of the above-mentioned time window may be fixed or may be dynamically changed, and the embodiment of the present application does not impose any limitation on this.
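• the sliding-window variant might be sketched as follows, again with assumed window length, threshold, and helper names:

    # Sketch: average Q over consecutive 5-second windows and pick a
    # frame extraction interval per window.
    WINDOW_S = 5.0

    def window_schedule(q_samples, sample_period_s, threshold,
                        slow_interval_s=2.0, fast_interval_s=1.0):
        """q_samples[i] is Q measured at time i * sample_period_s.
        Returns (window_start_s, extraction_interval_s) pairs."""
        per_window = max(1, int(WINDOW_S / sample_period_s))
        schedule = []
        for start in range(0, len(q_samples), per_window):
            window = q_samples[start:start + per_window]
            mean_q = sum(window) / len(window)
            interval = fast_interval_s if mean_q >= threshold else slow_interval_s
            schedule.append((start * sample_period_s, interval))
        return schedule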
• so far, the method for dynamically adjusting the frame extraction frequency has been explained by taking a mobile phone that switches between two frame extraction frequencies (that is, the first frame extraction frequency and the second frame extraction frequency) as an example. It is understandable that the mobile phone can also set three or more frame extraction frequencies for dynamic adjustment.
  • the mobile phone can preset the first threshold and the second threshold of the optical flow intensity (the second threshold is greater than the first threshold).
• when it is detected that the optical flow intensity Q from time 0 to time T1 in the optical flow curve is less than the first threshold, the mobile phone can extract X frames of shooting pictures during 0-T1 at the first frame extraction frequency; when it is detected that the optical flow intensity Q from time T1 to time T2 is greater than the first threshold and less than the second threshold, the mobile phone can extract Y frames of shooting pictures during T1-T2 at the second frame extraction frequency (the second frame extraction frequency is greater than the first); when it is detected that the optical flow intensity Q after time T2 is greater than the second threshold, the mobile phone can extract Z frames of shooting pictures after T2 at a third frame extraction frequency (the third frame extraction frequency is greater than the second).
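• with three frequencies the selection reduces to a small mapping; a sketch with placeholder thresholds (the text leaves the concrete values open):

    def interval_for_q(q, first_threshold=0.3, second_threshold=0.6):
        """Map optical flow intensity Q to an extraction interval (s)."""
        if q < first_threshold:
            return 2.0   # first (lowest) frequency: 1 frame / 2 s
        if q < second_threshold:
            return 1.0   # second frequency: 1 frame / 1 s
        return 0.5       # third (highest) frequency: 1 frame / 0.5 s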
• the mobile phone may also provide the user with the function of manually setting the frame extraction frequency for different recording time periods.
• for example, the user can manually enter the frame extraction frequency for different recording time periods in the camera's settings interface.
• the user can set the recording duration of the entire time-lapse photography to 30 minutes. Within these 30 minutes, the frame extraction frequency set by the user for the first 10 minutes and the last 10 minutes is 1 frame every 1 second, and the frame extraction frequency set by the user for the middle 10 minutes is 1 frame every 0.5 seconds.
• subsequently, the mobile phone can switch to the corresponding frame extraction frequency in each recording time period according to the frame extraction frequencies set by the user, and the embodiment of the present application does not impose any limitation on this.
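• a user-defined schedule such as the 30-minute example above could be represented as ordered (end time, interval) pairs; the structure below is assumed for illustration only:

    # (period_end_s, interval_s): first and last 10 minutes at 1 frame/s,
    # middle 10 minutes at 1 frame per 0.5 s, mirroring the example above.
    USER_SCHEDULE = [(600.0, 1.0), (1200.0, 0.5), (1800.0, 1.0)]

    def user_interval(t_s, schedule=USER_SCHEDULE):
        """Return the user-configured extraction interval at time t_s."""
        for period_end_s, interval_s in schedule:
            if t_s < period_end_s:
                return interval_s
        return schedule[-1][1]   # past the configured duration: keep last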
• in response to the recording stop operation performed by the user in the time-lapse photography mode, the mobile phone encodes the extracted M frames of shooting pictures into a time-lapse photography video, where the M frames include the foregoing X frames and Y frames of shooting pictures.
  • the user can click the record button 404 in the preview interface 401 again.
  • the mobile phone may encode the M frames of shooting pictures extracted at this time into this time-lapse photography video in chronological order.
• the extracted M frames of shooting pictures include the respective shooting pictures extracted by the mobile phone at the corresponding frame extraction frequency in different recording time periods.
• for example, if 300 frames of shooting pictures are extracted in total, the mobile phone may encode these 300 frames according to the preset frame rate. Taking a preset frame rate of 30 frames per second as an example, the mobile phone can encode the extracted 300 frames of shooting pictures into a 10-second time-lapse photography video at 30 frames per second.
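• as an illustrative sketch only (the embodiment does not name an encoder), the encoding step could be realized with OpenCV's VideoWriter, assuming the extracted pictures are in-memory image arrays:

    import cv2

    def encode_timelapse(frames, out_path="timelapse.mp4", fps=30.0):
        """Encode extracted frames into a video file.
        300 frames at 30 frames/second yield a 10-second video."""
        height, width = frames[0].shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
        for frame in frames:
            writer.write(frame)   # frames must share one size, BGR order
        writer.release()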
• the foregoing embodiments illustrate the method for dynamically adjusting the frame extraction frequency during time-lapse photography by taking the mobile phone calculating the optical flow intensity of the captured shooting pictures as an example.
• in other embodiments, the correspondence between different shooting scenes and different frame extraction frequencies may be stored in the mobile phone in advance. For example, when the shooting scene is one in which the scenery changes slowly, such as a sky scene, the corresponding frame extraction frequency is 1 frame every 2 seconds; when the shooting scene is one in which the scenery changes quickly, such as a bird scene, the corresponding frame extraction frequency is 1 frame every 0.5 seconds.
• after starting to record the time-lapse photography video, the mobile phone can recognize, in real time, the shooting scene in the captured shooting pictures.
• for example, the mobile phone can recognize the shooting scene in the shooting pictures through image analysis or AI (artificial intelligence) recognition algorithms.
• for example, the mobile phone can extract shooting pictures from those collected from the 1st to the 10th second at the first frame extraction frequency of 1 frame every 2 seconds.
• similarly, the mobile phone can extract shooting pictures from those collected from the 11th to the 14th second at the second frame extraction frequency of 1 frame every 0.5 seconds.
• the mobile phone can encode the extracted shooting pictures into this time-lapse photography video.
• in this way, when recording a time-lapse photography video, the mobile phone can use different frame extraction frequencies in different shooting scenes to extract the pictures of the time-lapse photography video from the recorded shooting pictures, so that the final video can accurately present the dynamic process of scenery changes in different shooting scenes, improving the shooting effect of the time-lapse photography video and the user experience.
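• for illustration only, the prestored correspondence could be held as a small lookup table; a sketch in Python (the scene labels and interval values are invented for the example):

    # Prestored scene -> extraction interval table (values illustrative).
    SCENE_INTERVALS_S = {"sky": 2.0, "bird": 0.5}
    DEFAULT_INTERVAL_S = 1.0

    def interval_for_scene(scene_label):
        """Look up the extraction interval for a recognized scene."""
        return SCENE_INTERVALS_S.get(scene_label, DEFAULT_INTERVAL_S)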
• in other embodiments, after the mobile phone starts to record the time-lapse photography video, the mobile phone can recognize, in real time, the shooting target in the captured shooting pictures and the movement speed of that target. As shown in FIG. 14, if the mobile phone recognizes that the shooting target of the pictures taken from the 1st to the 5th second is the person 1401, and the movement speed of the person 1401 is less than a preset value, the mobile phone can extract shooting pictures from those collected during the 1st to the 5th second at the first frame extraction frequency of 1 frame every 2 seconds.
• subsequently, the mobile phone can extract shooting pictures from those collected from the 5th to the 10th second at the second frame extraction frequency of 1 frame every 0.5 seconds.
• the mobile phone can encode the extracted shooting pictures into this time-lapse photography video.
  • the foregoing shooting target may be manually selected by the user.
  • the user may select the shooting target in the shooting screen displayed on the preview interface by manual focusing.
  • the aforementioned shooting target may also be automatically recognized by the mobile phone through image analysis or AI recognition and other algorithms, and the embodiment of the present application does not impose any limitation on this.
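• a sketch of such speed-based selection (the pixels-per-second estimate and its threshold are assumptions; any target tracker providing a speed estimate would serve):

    def interval_for_target_speed(speed_px_per_s, speed_threshold=50.0):
        """Slow-moving target -> sparse extraction; fast -> dense."""
        return 2.0 if speed_px_per_s < speed_threshold else 0.5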
• in the above embodiments, a time-lapse photography video of a sunrise process is taken as an example. It is understandable that the mobile phone can also use the above-mentioned time-lapse photography recording method to record other scenes such as the sunset process or the flower blooming process.
  • the embodiment of this application does not impose any limitation on this.
• taking the flower blooming process as an example, the mobile phone can determine the optical flow intensity Q between each two adjacent frames of the collected shooting pictures.
• before the flower bud opens, the variation range of the shooting content in the shooting pictures is not large, so the optical flow intensity Q between two adjacent shooting pictures is generally small.
• when the flower bud opens, the variation range of the shooting content in the shooting pictures increases, so the optical flow intensity Q between two adjacent shooting pictures is generally larger.
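• one plausible realization of Q (an assumption; the embodiment does not fix the estimator) is the mean magnitude of dense optical flow between consecutive grayscale frames, for example with OpenCV's Farneback method:

    import cv2
    import numpy as np

    def optical_flow_intensity(prev_gray, next_gray):
        """Mean flow-vector magnitude between consecutive gray frames."""
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, next_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        magnitude = np.linalg.norm(flow, axis=2)   # per-pixel displacement
        return float(magnitude.mean())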
• if it is detected within 0-T1 that the optical flow intensity Q between two adjacent frames of shooting pictures is less than the first threshold, indicating that the scene changes in the shooting pictures are not obvious, the mobile phone can extract X frames of shooting pictures at the smaller first frame extraction frequency; if it is detected within T1-T2 that the optical flow intensity Q between two adjacent frames of shooting pictures is greater than or equal to the first threshold, indicating that the scene changes in the shooting pictures are more obvious, the mobile phone can extract Y frames of shooting pictures at the larger second frame extraction frequency. In this way, the finally formed time-lapse photography video can accurately present the dynamic process of the flower bud opening during the blooming process, thereby improving the shooting effect of the time-lapse photography video and the user experience.
• it should be noted that the mobile phone can not only adjust the frame extraction frequency in real time according to the optical flow intensity Q between two adjacent frames of shooting pictures, but can also adjust the frame extraction frequency according to other parameters that reflect the magnitude of changes in the shooting content.
• for example, the mobile phone can take the flower in the shooting pictures as the shooting target and detect its movement speed. Then the mobile phone can adjust the frame extraction frequency in real time according to the movement speed of the shooting target to extract the shooting pictures of this time-lapse photography video, which is not limited in the embodiment of the present application.
• that is, when the mobile phone is recording a time-lapse photography video, if it detects that the movement speed of the shooting target is slow, the mobile phone can use a lower frame extraction frequency to extract the pictures of the time-lapse photography video from the recorded shooting pictures; if it detects that the movement speed of the shooting target is fast, the mobile phone can use a higher frame extraction frequency to extract the pictures of the time-lapse photography video from the recorded shooting pictures, so as to highlight, in the time-lapse photography video, the parts in which the shooting target moves quickly.
• alternatively, the mobile phone can save every frame of the shooting pictures collected by the camera during the recording of the time-lapse photography video without performing the above steps S303-S305 in real time.
• after the recording ends, the mobile phone can dynamically adjust the frame extraction frequency for different recording time periods by performing the above steps S303-S304, and extract the shooting pictures according to the corresponding frame extraction frequency in each recording time period to form this time-lapse photography video; the embodiment of this application does not impose any limitation on this.
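• the offline variant could then be a post-processing pass over saved, timestamped frames, reusing the hypothetical interval_for_q helper sketched earlier:

    def postprocess_saved_frames(saved):
        """saved: list of (timestamp_s, frame, q) tuples kept while
        recording. Re-runs rate selection offline and returns the
        frames to encode."""
        extracted, next_pick_s = [], 0.0
        for t, frame, q in saved:
            if t >= next_pick_s:
                extracted.append(frame)
                next_pick_s = t + interval_for_q(q)
        return extracted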
  • the mobile phone plays the aforementioned time-lapse photography video at a preset frame rate.
  • the mobile phone can store the time-lapse photography video obtained in step S305 in the photo album application (also referred to as a gallery application) of the mobile phone.
• when the user opens the time-lapse photography video 1501, the mobile phone can play the time-lapse photography video 1501 at the preset frame rate (for example, 30 frames per second).
• because the mobile phone increased the frame extraction frequency when the sun rose, for example from 1 frame every 1 second to 1 frame every 0.5 seconds, more shooting pictures of this scene change were extracted into the time-lapse photography video 1501.
• therefore, the shooting pictures of the rising sun finally extracted by the mobile phone can more vividly and accurately reflect the changes in detail as the sun rises, so that the time-lapse photography video 1501 highlights, during playback, the dynamic process of the significant scene change, thereby improving the recording effect of time-lapse photography and the user experience.
  • an embodiment of the present application discloses an electronic device, which can be used to implement the methods described in each of the above method embodiments.
  • the electronic device may specifically include: an acquisition module 1601, a preview module 1602, an optical flow analysis module 1603, a frame extraction module 1604, an encoding module 1605, and a playback module 1606.
  • the collection module 1601 is used to support the electronic device to perform the process S301 in FIG. 3; the preview module 1602 is used to support the electronic device to perform the process S302 in FIG. 3; the optical flow analysis module 1603 is used to support the electronic device to perform the process S303 in FIG. 3;
  • the frame extraction module 1604 is used to support the electronic device to perform the processes S304a, S304b, ... in FIG. 3;
  • the encoding module 1605 is used to support the electronic device to perform the process S305 in FIG. 3;
• the playback module 1606 is used to support the electronic device to perform the process S306 in FIG. 3.
• all relevant content of each step involved in the above method embodiments can be cited in the function description of the corresponding function module, and will not be repeated here.
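• read as software structure, these modules map onto a simple pipeline; the skeleton below is purely illustrative and only mirrors the reference numerals (the method names are inventions for the sketch):

    class TimeLapseRecorder:
        """Illustrative mapping of modules 1601-1606 to steps S301-S306."""

        def collect(self): ...        # acquisition module 1601  -> S301
        def preview(self): ...        # preview module 1602      -> S302
        def analyze_flow(self): ...   # optical flow module 1603 -> S303
        def extract(self): ...        # frame extraction 1604    -> S304a/b
        def encode(self): ...         # encoding module 1605     -> S305
        def play(self): ...           # playback module 1606     -> S306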
• an embodiment of the present application discloses an electronic device, including: a touch screen 1701, where the touch screen 1701 includes a touch-sensitive surface 1706 and a display screen 1707; one or more processors 1702; a memory 1703; one or more cameras 1708; and one or more computer programs 1704.
  • the aforementioned devices may be connected via one or more communication buses 1705.
• the aforementioned one or more computer programs 1704 are stored in the aforementioned memory 1703 and are configured to be executed by the one or more processors 1702. The one or more computer programs 1704 include instructions, and the instructions can be used to execute each step in the foregoing embodiments.
  • the foregoing processor 1702 may specifically be the processor 110 shown in FIG. 1
  • the foregoing memory 1703 may specifically be the internal memory 121 and/or the external memory 120 shown in FIG. 1
• the foregoing display screen 1707 may specifically be the display screen shown in FIG. 1
  • the aforementioned camera 1708 may specifically be the camera 193 shown in FIG. 1
• the aforementioned touch-sensitive surface 1706 may specifically be the touch sensor in the sensor module 180 shown in FIG. 1, which is not limited in this embodiment of the application.
  • the functional units in the various embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
• if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • a computer readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention relates to the technical field of terminals, and provides a recording method for time-lapse photography, and an electronic device. A time-lapse photography video can be recorded at a dynamically changing frame extraction frequency, so as to emphasize the highlights with a high degree of picture change in time-lapse photography and improve the user experience. The method comprises the following steps: an electronic device displays a preview interface in a time-lapse photography mode; in response to a recording operation performed by a user, the electronic device starts to record frames of shooting pictures captured by a camera; the electronic device extracts X frames of shooting pictures at a first frame extraction frequency from N1 frames of shooting pictures acquired during a first recording time period; the electronic device extracts Y frames of shooting pictures at a second frame extraction frequency from N2 frames of shooting pictures acquired during a second recording time period, the second frame extraction frequency being different from the first frame extraction frequency; in response to a recording stop operation performed by the user, the electronic device encodes M extracted frames of shooting pictures into a time-lapse photography video.
PCT/CN2020/079402 2019-03-25 2020-03-14 Procédé d'enregistrement pour la photographie à intervalle, et dispositif électronique WO2020192461A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910229645.5A CN110086985B (zh) 2019-03-25 2019-03-25 一种延时摄影的录制方法及电子设备
CN201910229645.5 2019-03-25

Publications (1)

Publication Number Publication Date
WO2020192461A1 true WO2020192461A1 (fr) 2020-10-01

Family

ID=67413619

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079402 WO2020192461A1 (fr) 2019-03-25 2020-03-14 Procédé d'enregistrement pour la photographie à intervalle, et dispositif électronique

Country Status (2)

Country Link
CN (1) CN110086985B (fr)
WO (1) WO2020192461A1 (fr)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110086985B (zh) * 2019-03-25 2021-03-30 华为技术有限公司 一种延时摄影的录制方法及电子设备
CN112532859B (zh) * 2019-09-18 2022-05-31 华为技术有限公司 视频采集方法和电子设备
CN112532857B (zh) * 2019-09-18 2022-04-12 华为技术有限公司 一种延时摄影的拍摄方法及设备
CN112887583B (zh) 2019-11-30 2022-07-22 华为技术有限公司 一种拍摄方法及电子设备
CN111294509A (zh) * 2020-01-22 2020-06-16 Oppo广东移动通信有限公司 视频拍摄方法、装置、终端及存储介质
CN113225490B (zh) * 2020-02-04 2024-03-26 Oppo广东移动通信有限公司 延时摄影方法及其摄影装置
CN111240184B (zh) * 2020-02-21 2021-12-31 华为技术有限公司 钟表误差的确定方法、终端、计算机存储介质
CN111526281B (zh) * 2020-03-25 2021-06-25 东莞市至品创造数码科技有限公司 一种计算延时摄影影像时长的方法及装置
CN111464760A (zh) * 2020-05-06 2020-07-28 Oppo(重庆)智能科技有限公司 动态图像的生成方法、生成装置及终端设备
CN113747049B (zh) * 2020-05-30 2023-01-13 华为技术有限公司 一种延时摄影的拍摄方法及设备
WO2022021128A1 (fr) * 2020-07-29 2022-02-03 深圳市大疆创新科技有限公司 Procédé de traitement d'image, dispositif électronique, caméra et support de stockage lisible
CN117857723A (zh) * 2020-12-21 2024-04-09 维沃移动通信有限公司 画面处理方法及装置
CN112702607B (zh) * 2020-12-25 2022-11-22 深圳大学 一种基于光流决策的智能视频压缩方法及装置
CN114827443A (zh) * 2021-01-29 2022-07-29 深圳市万普拉斯科技有限公司 视频帧选取方法、视频延时处理方法、装置及计算机设备
CN113726949B (zh) * 2021-05-31 2022-08-26 荣耀终端有限公司 一种视频处理方法、电子设备及存储介质
CN113810596B (zh) * 2021-07-27 2023-01-31 荣耀终端有限公司 延时摄影方法和装置
CN113691721B (zh) * 2021-07-28 2023-07-18 浙江大华技术股份有限公司 一种缩时摄影视频的合成方法、装置、计算机设备和介质
CN115776532B (zh) * 2021-09-07 2023-10-20 荣耀终端有限公司 一种录像中抓拍图像的方法及电子设备
CN113556473B (zh) * 2021-09-23 2022-02-08 深圳市天和荣科技有限公司 花朵开花过程的拍摄方法、装置、电子设备及存储介质
CN114679607B (zh) * 2022-03-22 2024-03-05 深圳云天励飞技术股份有限公司 一种视频帧率控制方法、装置、电子设备及存储介质
CN114827477B (zh) * 2022-05-26 2024-03-29 维沃移动通信有限公司 延时摄影的方法、装置、电子设备及介质
CN116708751B (zh) * 2022-09-30 2024-02-27 荣耀终端有限公司 一种拍照时长的确定方法、装置和电子设备
CN115988262A (zh) * 2022-12-14 2023-04-18 北京有竹居网络技术有限公司 用于视频处理的方法、装置、设备和介质
CN117714899A (zh) * 2023-07-28 2024-03-15 荣耀终端有限公司 延时摄影的拍摄方法和电子设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9247098B2 (en) * 2013-04-09 2016-01-26 Here Global B.V. Automatic time lapse capture
CN104539864B (zh) * 2014-12-23 2018-02-02 小米科技有限责任公司 记录图像的方法和装置
US10187607B1 (en) * 2017-04-04 2019-01-22 Gopro, Inc. Systems and methods for using a variable capture frame rate for video capture
CN107197162B (zh) * 2017-07-07 2020-11-13 盯盯拍(深圳)技术股份有限公司 拍摄方法、拍摄装置、视频存储设备以及拍摄终端

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160344927A1 (en) * 2015-05-21 2016-11-24 Apple Inc. Time Lapse User Interface Enhancements
CN108293123A (zh) * 2015-12-22 2018-07-17 三星电子株式会社 用于产生缩时图像的方法和设备
CN105959539A (zh) * 2016-05-09 2016-09-21 南京云恩通讯科技有限公司 一种自动确定延时速度的延时摄影方法
CN107396019A (zh) * 2017-08-11 2017-11-24 维沃移动通信有限公司 一种慢动作视频录制方法及移动终端
CN109068052A (zh) * 2018-07-24 2018-12-21 努比亚技术有限公司 视频拍摄方法、移动终端及计算机可读存储介质
CN110086985A (zh) * 2019-03-25 2019-08-02 华为技术有限公司 一种延时摄影的录制方法及电子设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615421A (zh) * 2020-12-07 2022-06-10 华为技术有限公司 图像处理方法及电子设备
CN114615421B (zh) * 2020-12-07 2023-06-30 华为技术有限公司 图像处理方法及电子设备
CN113643728A (zh) * 2021-08-12 2021-11-12 荣耀终端有限公司 一种音频录制方法、电子设备、介质及程序产品
CN113643728B (zh) * 2021-08-12 2023-08-22 荣耀终端有限公司 一种音频录制方法、电子设备、介质及程序产品
CN114390236A (zh) * 2021-12-17 2022-04-22 云南腾云信息产业有限公司 视频处理方法、装置、计算机设备和存储介质
CN117714850A (zh) * 2023-08-31 2024-03-15 荣耀终端有限公司 延时摄影方法及其相关设备

Also Published As

Publication number Publication date
CN110086985A (zh) 2019-08-02
CN110086985B (zh) 2021-03-30

Similar Documents

Publication Publication Date Title
WO2020192461A1 (fr) Procédé d'enregistrement pour la photographie à intervalle, et dispositif électronique
WO2020186969A1 (fr) Procédé et dispositif d'enregistrement de vidéo multi-pistes
WO2021052232A1 (fr) Procédé et dispositif de photographie à intervalle de temps
WO2021052292A1 (fr) Procédé d'acquisition de vidéo et dispositif électronique
WO2021031915A1 (fr) Procédé et appareil d'enregistrement vidéo intelligents
US11889180B2 (en) Photographing method and electronic device
WO2020078237A1 (fr) Procédé de traitement audio et dispositif électronique
WO2020238741A1 (fr) Procédé de traitement d'image, dispositif associé et support de stockage informatique
CN110381276B (zh) 一种视频拍摄方法及电子设备
CN112954251B (zh) 视频处理方法、视频处理装置、存储介质与电子设备
WO2023077939A1 (fr) Procédé et appareil de commutation de caméra, dispositif électronique et support de stockage
WO2023284591A1 (fr) Procédé et appareil de capture vidéo, dispositif électronique et support de stockage
CN115086567A (zh) 延时摄影方法和装置
CN113593567B (zh) 视频声音转文本的方法及相关设备
CN112037157B (zh) 数据处理方法及装置、计算机可读介质及电子设备
CN112188094B (zh) 图像处理方法及装置、计算机可读介质及终端设备
CN115412678B (zh) 曝光处理方法、装置及电子设备
WO2022033344A1 (fr) Procédé de stabilisation vidéo, dispositif de terminal et support de stockage lisible par ordinateur
CN115297269B (zh) 曝光参数的确定方法及电子设备
WO2024193523A1 (fr) Procédé de traitement d'image basé sur une collaboration d'extrémité à nuage et appareil associé
CN117714876B (zh) 图像显示方法、存储介质、电子设备及芯片
US20240365008A1 (en) Multi-channel video recording method and device
RU2780808C1 (ru) Способ фотографирования и электронное устройство
RU2789447C1 (ru) Способ и устройство многоканальной видеозаписи
WO2022218216A1 (fr) Procédé de traitement d'images et dispositif terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20777822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20777822

Country of ref document: EP

Kind code of ref document: A1