CN113067994A - Video recording method and electronic equipment - Google Patents


Info

Publication number
CN113067994A
CN113067994A (application CN202110349654.5A)
Authority
CN
China
Prior art keywords
video
image
frame
frame rate
images
Prior art date
Legal status
Granted
Application number
CN202110349654.5A
Other languages
Chinese (zh)
Other versions
CN113067994B (en)
Inventor
卢晓鹏 (Lu Xiaopeng)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202110349654.5A
Publication of CN113067994A
Application granted
Publication of CN113067994B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback

Abstract

The application discloses a video recording method and an electronic device. Within a single video capture event, video images are captured and at least some of them are stored as video frame images of a first video determined based on a first frame rate. Based on start point calibration information, video frame images of a second video determined based on a second frame rate (different from the first frame rate) are obtained, until end point calibration information is obtained. An output video comprising the video frame images of both the first video and the second video is generated; during playback of the output video, the motion effect of objects output by the first video differs from that of objects output by the second video. The recorded video effect is therefore no longer uniform, which improves watchability. Moreover, compared with capturing video images at a high frame rate throughout a video capture event to achieve a slow motion effect, the method reduces the device's bandwidth and/or storage space consumption.

Description

Video recording method and electronic equipment
Technical Field
The application belongs to the field of multimedia information acquisition and processing, and particularly relates to a video recording method and electronic equipment.
Background
With the popularization of short-video applications, demand for the video functions of electronic devices such as mobile phones has greatly increased. Slow motion photography, also known as overcranking, is a special shooting technique. The desired slow motion effect can be achieved through slow motion photography; however, existing slow motion photography has drawbacks such as high bandwidth and storage space consumption, a uniform generated slow motion video effect, and low watchability.
Disclosure of Invention
Therefore, the present application discloses a video recording method and an electronic device to solve at least some of the technical problems in the prior art.
The specific scheme is as follows:
a video recording method, comprising:
acquiring video images through an image acquisition device in response to an obtained instruction to start video recording, and storing at least part of the acquired video images as video frame images of a first video determined based on a first frame rate;
in the process of acquiring the video image through the image acquisition device, displaying the video image through a display screen;
in the process of acquiring the video images through the image acquisition device, if start point calibration information is obtained, obtaining video frame images serving as a second video based on the start point calibration information until end point calibration information is obtained, wherein the second video is determined based on a second frame rate, and the first frame rate is different from the second frame rate;
ending video recording in response to the obtained instruction to end video recording, and obtaining an output video generated based on the first video and the second video, the output video comprising video frame images of the first video and video frame images of the second video;
wherein, during playback of the output video, the motion effect of objects output by the video frame images of the first video differs from the motion effect of objects output by the video frame images of the second video.
Optionally, obtaining the video frame images serving as the second video based on the start point calibration information until the end point calibration information is obtained includes:
switching from a first image acquisition device to a second image acquisition device, the first image acquisition device being the image acquisition device that acquires video images at the first frame rate;
acquiring video images at the second frame rate through the second image acquisition device and storing them as video frame images of the second video, until the end point calibration information is obtained.
Optionally, the method further includes:
and switching from the second image acquisition device back to the first image acquisition device based on the end point calibration information, until the start point calibration information is obtained again or the instruction to end video recording is obtained.
Optionally, acquiring video images through the image acquisition device and storing at least part of them as video frame images of the first video determined based on the first frame rate includes:
acquiring video images at the first frame rate through the image acquisition device;
storing all video images acquired at the first frame rate as video frame images of the first video;
and obtaining the video frame images serving as the second video based on the start point calibration information until the end point calibration information is obtained includes:
taking the moment corresponding to the start point calibration information as a starting point, using the video images acquired by the image acquisition device at the first frame rate as video frame images of the second video, until the end point calibration information is obtained;
wherein the second video is formed by performing frame interpolation on the video frames acquired by the image acquisition device at the first frame rate between the start point calibration information and the end point calibration information, such that the frame rate after interpolation is the second frame rate.
Optionally, acquiring video images through the image acquisition device and storing them as video frame images of the first video determined based on the first frame rate includes:
acquiring video images at the second frame rate through the image acquisition device;
performing frame dropping on the video images acquired at the second frame rate, and storing the frame images remaining after frame dropping as the video frame images of the first video;
and obtaining the video frame images serving as the second video based on the start point calibration information until the end point calibration information is obtained includes:
taking the moment corresponding to the start point calibration information as a starting point, using the video images acquired by the image acquisition device at the second frame rate as video frame images of the second video, until the end point calibration information is obtained.
Optionally, obtaining the start point calibration information includes:
detecting first target operation information;
or, detecting that a target subject object appears in the video image, or that a subject object in the current video image exhibits a target behavior.
Optionally, obtaining the end point calibration information includes:
detecting second target operation information;
or, detecting that the target subject object disappears from the video image, or that the behavior of the subject object in the video image switches away from the target behavior.
Optionally, generating the output video based on the first video and the second video includes:
during recording, sequentially inputting each video frame image of the first video and each video frame image of the second video to a video compression and encoding unit in time order;
the video compression and encoding unit sequentially compresses and encodes each received video frame image of the first video and of the second video at the first frame rate to obtain the output video.
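The time-ordered interleaving of the two videos' frames described above can be sketched in Python. This is a minimal illustration, not the disclosed implementation; `merge_for_encoding` and the `(timestamp, image)` tuple representation are hypothetical stand-ins for the input to the video compression and encoding unit:

```python
import heapq

def merge_for_encoding(first_frames, second_frames):
    """Merge the two videos' (timestamp, image) frames into a single
    time-ordered sequence, as fed to the compression/encoding unit.
    Both inputs are assumed to already be sorted by timestamp."""
    return list(heapq.merge(first_frames, second_frames))

# First video frames surround a burst of second-video frames in time
first = [(0.0, "f0"), (1.0, "f1"), (2.0, "f2")]
second = [(1.1, "s0"), (1.2, "s1")]
merged = merge_for_encoding(first, second)
assert [img for _, img in merged] == ["f0", "f1", "s0", "s1", "f2"]
```

Because `heapq.merge` only assumes each input is individually sorted, the second video's frames slot into the first video's timeline wherever the start/end calibration placed them.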
Optionally, before sequentially inputting each video frame image of the second video to the video compression and encoding unit in time order, the method further includes:
performing picture alignment on at least a portion of the video frame images of the second video based on the video frame images of the first video.
An electronic device, comprising:
a display screen;
an image acquisition device;
a memory for storing at least one set of instructions;
a processor for calling and executing the instruction set in the memory, and implementing the video recording method according to any one of the above items by executing the instruction set.
According to the above scheme, within a single video capture event, video images are captured and at least some of them are stored as video frame images of a first video determined based on a first frame rate; video frame images of a second video determined based on a second frame rate (different from the first frame rate) are obtained based on start point calibration information until end point calibration information is obtained; and an output video comprising the video frame images of both videos is generated, in which, during playback, the motion effect of objects output by the first video differs from that of objects output by the second video. A video with different playing effects is thus output from a single video capture process, so the resulting video effect is no longer uniform and watchability is improved.
Moreover, compared with capturing video frame images at a high frame rate throughout a video capture event to achieve the slow motion effect, the device's bandwidth and/or storage space consumption is reduced.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a video recording method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a video recording process implemented with dual cameras according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of generating an output video according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a comparison between a playing time of an output video and an actually elapsed time of a recording object according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of generating an output video according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a video recording process implemented with a single camera according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another video recording process implemented with a single camera according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
The principle of slow motion video shooting is to capture video images at a high frame rate (e.g. 120 fps) and to output and play them at a low frame rate (usually a normal playback frame rate, e.g. 30 fps), thereby achieving the slow motion effect. The applicant has found that the image acquisition devices (e.g. cameras) of mobile phones, tablet computers and similar devices cannot change their frame rate online; changing the frame rate requires powering the image acquisition device off and on again, which interrupts the video and breaks the seamless connection of the video stream. Consequently, existing slow motion shooting can only capture video images at the high frame rate for the entire duration, and a single video capture event can only generate video at a single frame rate, which causes at least the following defects:
1) in video recording with a slow motion effect, the frame rate is raised for the whole recording process, so the bandwidth and storage space requirements increase greatly;
2) the generated slow motion video has a uniform effect and low watchability, specifically:
during playback, apart from the improvement brought by slow motion in a few periods (such as the moment an athlete clears a hurdle), slow motion applied to ordinary scenes makes the rhythm of the whole video sluggish, reduces watchability, and fails to make the slow motion effect of the key moments stand out.
To solve at least some of the above technical problems, the present application discloses a video recording method and an electronic device for generating, from a single video capture event, an output video with different playing effects (e.g., a normal playing effect for some video clips + a slow motion effect for other video clips).
Referring to fig. 1, which shows a flowchart of the video recording method disclosed in the present application, the method is applicable to, but not limited to, an electronic device having a display screen and at least one image acquisition device, such as a smart terminal like a mobile phone or a tablet computer.
As shown in fig. 1, in this embodiment, the video recording method includes:
and step 101, responding to the obtained video recording starting instruction, acquiring video images through an image acquisition device and storing at least part of the acquired video images as video frame images of a first video determined based on a first frame rate.
In the present application, the electronic device includes at least one image acquisition device, and accordingly may be a single-camera device (having one camera) or a multi-camera device (e.g. dual-camera).
The first frame rate is preferably a frame rate that gives the first video a normal playing effect, usually 30 fps. A first video with a normal playing effect means that, when the first video is played, the motion of objects output in the video images is consistent with the motion of the objects in the actual scene, with neither a fast motion nor a slow motion effect, while the smoothness of the video pictures during playback is still ensured.
In this embodiment, when the user triggers the instruction to start video recording by operating the electronic device, for example by pressing the video capture control on the operation interface of a mobile phone camera, or by issuing a voice instruction for video capture to the phone, the electronic device responds to the instruction and by default starts acquiring and storing video frame images of the first video through the image acquisition device. That is, in a video capture event, image capture initially defaults to the video corresponding to the normal playing effect: at least part of the collected video images serve as video frames of the first video, the first video with the first frame rate is generated from those images, and the first video is stored according to the first frame rate, i.e. each second on the time axis of the stored first video corresponds to a first-frame-rate number of video frame images.
When the image acquisition device captures video images so that at least part of them serve as video frame images of the first video, in one embodiment the video images may be captured at the first frame rate, such as 30 fps, i.e. the image acquisition device captures a first-frame-rate number of video images per second. In this embodiment the capture frame rate matches the frame rate required by the first video, so all captured video images can directly serve as video frame images of the first video, and the video image sequence captured at the first frame rate can directly serve as the video stream of the first video, yielding the first video at the first frame rate.
In another embodiment, the video images may be captured at a second frame rate higher than the first frame rate. Here the capture frame rate does not match the frame rate required by the first video, so only a subset of the captured video images is selected as video frame images of the first video. Preferably, frames are dropped at relatively uniform intervals according to the multiple relationship between the second frame rate used for capture and the first frame rate required by the first video. For example, if the first frame rate is 30 fps and the second frame rate is 120 fps, 1 frame out of every 4 may be selected as a video frame image of the first video, i.e. 3 frames are dropped in every 4. The time information of each remaining video image on the time axis is maintained, and the video image sequence formed by the remaining images serves as the video stream of the first video, yielding the first video at the first frame rate.
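The interval frame-dropping just described can be sketched in Python under the 30 fps / 120 fps example. This is a minimal illustration; `drop_frames` and the `(timestamp, image)` tuple representation are hypothetical names, not part of the disclosed method:

```python
def drop_frames(frames, capture_fps=120, target_fps=30):
    """Keep every (capture_fps // target_fps)-th frame, preserving its
    original timestamp, so a 120 fps stream becomes a 30 fps stream.

    `frames` is a list of (timestamp_seconds, image) tuples."""
    step = capture_fps // target_fps  # e.g. 120 // 30 = 4: keep 1, drop 3
    return frames[::step]

# Example: 1 second of capture at 120 fps
captured = [(i / 120, f"img{i}") for i in range(120)]
first_video = drop_frames(captured)
assert len(first_video) == 30          # 30 frames remain per second
assert first_video[1][0] == 4 / 120    # timestamps keep their original positions
```

Because the kept frames retain their capture timestamps, the resulting sequence still spans the full second on the time axis, as the text requires.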
Step 102, displaying the video images on a display screen in the process of acquiring them through the image acquisition device.
In the process of acquiring the video image through the image acquisition device, a preview picture of the acquired video is displayed in real time through a display screen of the electronic equipment.
Step 103, in the process of acquiring the video image by the image acquisition device, if the start point calibration information is obtained, obtaining a video frame image as a second video based on the start point calibration information until the end point calibration information is obtained, wherein the second video is determined based on a second frame rate.
Wherein the second frame rate is different from the first frame rate.
For example, the second frame rate may be higher than the first frame rate, or lower than the first frame rate, which is not limited herein.
The embodiments of the present application describe the solution by taking the case where the second frame rate is higher than the first frame rate as an example. In that case, the second frame rate is a frame rate that gives the second video a slow motion playing effect, such as 60 fps or 120 fps.
In order to generate output videos with different playing effects (such as a normal playing effect for some video clips + a slow motion effect for others) from a single video capture event, this embodiment supports performing a start point/end point calibration action during video image capture. The start/end point calibration information generated by that action serves as indication (notification) information for starting/ending the obtaining of video frame images for the second video, so that the second video at the second frame rate is generated from the video frame images obtained between the start point calibration information and the end point calibration information.
The calibration action for the start point/end point can be implemented in two ways:
1) manual calibration
In this implementation, the user may manually perform the start point/end point calibration action based on the preview picture displayed in real time on the display screen during video image capture (which is synchronized with the actual behavior of the objects in the captured environment).
For example, when the user perceives from the preview picture that there is video content that should be highlighted with a slow motion effect, such as the moment a basketball is shot, a football is passed, or a water drop falls, the user presses in real time a control for switching to slow motion shooting or a dedicated calibration control displayed on the preview picture to perform start point calibration, or issues a start point calibration instruction to the electronic device via any one or more input modes such as gestures or voice.
In the manual calibration mode, the electronic device obtains the start point calibration information by detecting first target operation information of the user. The first target operation information may be, but is not limited to, operation information generated when the user operates any of the above controls, or gesture information and/or voice information of the user for start point calibration. The start point calibration information obtained by detecting the first target operation information may be time information (for example, the moment at which the electronic device detects the first target operation information), or calibration information triggered via the dedicated calibration control; this is not limited here.
Correspondingly, the electronic device obtains the end point calibration information by detecting second target operation information of the user. The second target operation information and the end point calibration information are similar to the first target operation information and the start point calibration information, respectively, and are not described again.
2) Automatic calibration
In this implementation, the electronic device automatically detects and recognizes the shooting object and/or the posture or behavior of the shooting object by performing image analysis on the acquired video images.
If a target subject object appears in the video image, or a subject object in the current video image exhibits a target behavior, start point calibration is performed automatically and corresponding start point calibration information is generated, to trigger/notify the electronic device to begin obtaining video images serving as video frame images of the second video. If the target subject object disappears from the video image, or the behavior of the subject object switches away from the target behavior, end point calibration is performed automatically and corresponding end point calibration information is generated, to trigger/notify the electronic device to stop obtaining video images serving as video frame images of the second video.
The target subject object may be an object that needs to be highlighted in the recording scene, such as an athlete in a hurdle race, raindrops in the rain, or falling snowflakes; the target behavior of the subject object may correspondingly be a posture or behavior that needs to be highlighted, such as the airborne action at the moment a basketball is shot or a football is passed, or the falling action at the moment a water drop falls.
In implementation, a large volume of sample data can be collected in advance, for example individual videos/video clips collected as a series of samples, and deep learning performed on them to automatically learn the object features and/or posture and behavior features of the objects in the samples, thereby giving the electronic device the ability to automatically detect and recognize a shooting object and/or its posture and behavior.
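The automatic calibration logic described above (start calibration when the target appears or the target behavior begins, end calibration when it disappears or stops) amounts to a simple per-frame state machine. The sketch below assumes the detector reduces each frame to a boolean; `calibration_events` is a hypothetical name, and the actual detection model is out of scope here:

```python
def calibration_events(detections):
    """Turn per-frame detections into start/end calibration events.

    `detections` is an iterable of booleans: True when the target
    subject object (or its target behavior) is present in the frame.
    Yields (frame_index, "start") when the target appears and
    (frame_index, "end") when it disappears."""
    active = False
    for i, present in enumerate(detections):
        if present and not active:
            active = True
            yield (i, "start")   # start point calibration information
        elif not present and active:
            active = False
            yield (i, "end")     # end point calibration information

events = list(calibration_events([False, False, True, True, True, False]))
assert events == [(2, "start"), (5, "end")]
```

The same state machine also handles repeated slow motion segments within one recording, since each new appearance of the target emits a fresh start event.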
In response to obtaining the start point calibration information, the electronic device starts obtaining a video image that is a video frame image of the second video, and generates the second video at the second frame rate based on the obtained video image.
The video images serving as video frame images of the second video are obtained by capturing video images with the image acquisition device and using at least part of the captured video images as video frame images of the second video.
In the case where the electronic device includes only a single image acquisition device, given that the image acquisition device cannot change its frame rate online, and in order to ensure a seamless connection between the second video and the first video without interrupting the video stream, the frame rate used to capture video images for the second video is the same as the frame rate used to capture video images for the first video.
In one embodiment, the video image capture frame rate for both the first video and the second video is the first frame rate, e.g. 30 fps, and the second video at the second frame rate (e.g. 120 fps) is obtained by performing frame interpolation on the video images captured as the second video's frame images. In another embodiment, the capture frame rate for both is the second frame rate (e.g. 120 fps); in that case the video image sequence captured between the start point calibration information and the end point calibration information directly serves as the video stream of the second video, yielding the second video at the second frame rate, while the first video at the first frame rate is obtained by frame dropping.
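The single-camera interpolation path (30 fps capture lifted to a 120 fps second video) can be sketched as follows. For simplicity this naive illustration duplicates frames rather than synthesizing intermediate ones, which a real implementation would do (e.g. motion-compensated interpolation); `interpolate_frames` is a hypothetical name:

```python
def interpolate_frames(frames, capture_fps=30, target_fps=120):
    """Naive frame interpolation by duplication: each captured frame is
    repeated (target_fps // capture_fps) times with evenly spaced
    timestamps, so 1 s of 30 fps capture becomes 1 s of 120 fps video."""
    factor = target_fps // capture_fps  # e.g. 120 // 30 = 4
    out = []
    for ts, img in frames:
        for k in range(factor):
            # duplicated frames fill the gaps between captured timestamps
            out.append((ts + k / target_fps, img))
    return out

captured = [(i / 30, f"img{i}") for i in range(30)]   # 1 s at 30 fps
second_video = interpolate_frames(captured)
assert len(second_video) == 120   # 1 s now holds 120 frame images
```

Either way, the second video's time axis carries a second-frame-rate number of images per second, which is what produces the slow motion effect when played at the first frame rate.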
In the case where the electronic device includes at least two image acquisition devices, in response to the start point calibration information, video images can be captured directly at the second frame rate by switching image acquisition devices, and the captured video image sequence serves as the video stream of the second video, yielding the second video at the second frame rate.
Step 104, in response to the obtained instruction to end video recording, ending video recording and obtaining an output video generated based on the first video and the second video, the output video comprising video frame images of the first video and video frame images of the second video.
when a user presses a control for indicating ending of video recording or executes gesture and/or voice operation for indicating ending of video recording, the electronic equipment correspondingly obtains a video recording ending instruction.
In response to the video recording ending instruction, the electronic equipment ends video recording and obtains an output video generated based on the first video and the second video.
During playback of the output video, the motion effect of objects output by the video frame images of the first video differs from the motion effect of objects output by the video frame images of the second video.
Specifically, in the process of playing the output video, the first video and the second video are both played at the same frame rate, for example, the first frame rate.
Since the first video is determined based on the first frame rate (in the generated first video, each second on the time axis is controlled to correspond to 30 images, consistent with the timing of the objects' actual behavior), when it is played at the first frame rate the motion of objects output by its video frame images appears normal, i.e. consistent with how the objects move in the actual environment. Since the second video is determined based on the second frame rate (each second on its time axis corresponds to 120 frames, likewise consistent with the timing of the objects' actual behavior), when it is played at the first frame rate the objects output by its video frame images exhibit a slow motion effect (for example, playing the second video at 30 fps stretches an original 1 s to 4 s of playback). The goal of generating an output video with different playing effects from a single video capture event is thereby achieved.
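The time-stretch arithmetic in the example above (1 s captured at 120 fps plays for 4 s at 30 fps) can be checked with a one-line calculation; `playback_duration` is an illustrative helper, not part of the disclosed method:

```python
def playback_duration(recorded_seconds, capture_fps, playback_fps):
    """On-screen duration when frames captured at capture_fps
    are played back at playback_fps."""
    n_frames = recorded_seconds * capture_fps
    return n_frames / playback_fps

# First video: 30 fps frames played at 30 fps -> real time
assert playback_duration(1, 30, 30) == 1.0
# Second video: 120 fps frames played at 30 fps -> 4x slow motion
assert playback_duration(1, 120, 30) == 4.0
```

The slow motion factor is simply the ratio of the second frame rate to the playback (first) frame rate.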
According to this scheme, within a single video capture event, video images are captured and at least some of them are stored as video frame images of a first video determined based on a first frame rate; video frame images of a second video determined based on a second frame rate (different from the first) are obtained based on start point calibration information until end point calibration information is obtained; and an output video comprising the video frame images of both videos is generated, in which, during playback, the motion effect output by the first video for a given object differs from the motion effect output by the second video for the same object. A video with different playing effects is thus produced from a single video capture process; the resulting video effect is no longer uniform, and watchability is improved.
Moreover, compared with capturing video frame images at a high frame rate throughout a video capture event to achieve the slow motion effect, this scheme reduces the bandwidth and/or storage space consumed on the device.
Optionally, in an embodiment, the electronic device includes at least two image capturing devices.
In this case, as shown in fig. 2, the video recording method disclosed in the present application may be specifically implemented as:
step 201, in response to the obtained instruction for starting video recording, capturing a video image at a first frame rate by a first image capturing device, and storing the captured video image as a video frame image of a first video.
In order to generate output videos with different playing effects based on one video capture event, this embodiment sets different frame rate configuration parameters for two image capturing devices of an electronic device respectively.
Wherein the first image capturing device is configured to capture video images at a first frame rate (e.g., 30fps) to obtain video with normal play effect, and the second image capturing device is configured to capture video images at a second frame rate (e.g., 60fps or 120fps) to obtain video with slow motion play effect.
When video recording is started based on the start-recording instruction, the electronic device captures video images at the first frame rate using the first image capturing device by default. Because this capture frame rate matches the frame rate required by the first video, the captured video image sequence is used directly as the video stream of the first video, i.e., the first video at the first frame rate is obtained.
Step 202, in the process of capturing the video image through the first image capturing device, displaying the video image through the display screen.
While video images are being captured by the first image capturing device, a preview of the captured video is displayed in real time on the display screen of the electronic device.
Based on the image content of the preview, such as whether a target subject object has appeared or whether a target behavior of the subject object has occurred in the video image, the user can decide whether to perform the corresponding start point calibration operation.
Step 203, in the process of capturing the video image by the first image capturing device, if start point calibration information is obtained, switching from the first image capturing device to the second image capturing device.
Obtaining the start point calibration information indicates that video frame images of the second video now need to be acquired; capture therefore switches from the first image capturing device to the second image capturing device, thereby switching the capture frame rate.
And step 204, acquiring a video image at a second frame rate through a second image acquisition device and storing the video image as a video frame image of a second video until the end point calibration information is obtained.
In the process of acquiring the video image through the second image acquisition device, a preview picture of the acquired video is also displayed in real time through a display screen of the electronic equipment, and the behaviors of the objects in the preview picture are synchronous and consistent with the behaviors of the objects in the actual scene in time.
The user can select whether to perform a corresponding end point calibration operation based on image contents on the preview screen, such as whether the target subject object disappears from the screen, or whether the target behavior of the subject object ends, or the like.
The second image capturing device captures video images at the second frame rate, which matches the frame rate required by the second video. Accordingly, the video image sequence captured by the second image capturing device, up to the point where the end point calibration information is obtained, can be used directly as the video stream of the second video, yielding the second video at the second frame rate.
In addition, when the end point calibration information is obtained, capture switches from the second image capturing device at the second frame rate back to the first image capturing device at the first frame rate, until the start point calibration information is obtained again or an end-recording instruction is received. As a result, when the subsequently generated output video is played, the playback alternates as the playing progress advances: a segment with normal playing effect switches to a segment with slow motion effect, which in turn switches to the next segment with normal playing effect.
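The back-and-forth switching described above amounts to a small state machine. A minimal sketch, assuming illustrative device labels, event strings, and example frame rates that are not taken from the patent:

```python
def select_device(events):
    """Replay a sequence of calibration events and record which capture
    device (and frame rate) is active after each event.
    'start'/'end' stand for start/end point calibration information."""
    FIRST, SECOND = ("first", 30), ("second", 120)  # (device, fps) - example rates
    active = FIRST  # recording begins on the first (normal-rate) device
    states = []
    for ev in events:
        if ev == "start":   # start point calibration -> slow-motion capture
            active = SECOND
        elif ev == "end":   # end point calibration -> back to normal capture
            active = FIRST
        states.append(active)
    return states

# Normal -> slow motion -> normal -> slow motion again, as described above.
assert select_device(["start", "end", "start"]) == [
    ("second", 120), ("first", 30), ("second", 120)]
```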
In the output video, the video frame images of the first video and the second video may or may not include the same object; no limitation is imposed here. Where they do include the same object, the motion effect of that object as output by the video frame images of the first video differs, during playback of the output video, from its motion effect as output by the video frame images of the second video.
For example, suppose the output video recorded for a basketball shot consists of three video segments: first video 1 + second video + first video 2, where first video 1 corresponds to the beginning of the ball's trajectory, the second video corresponds to the main part of the trajectory (i.e., the core moment of the shot, from the ball approaching the rim, within a distance threshold, to the ball entering it), and first video 2 corresponds to the end of the trajectory. Given the playing effect of each segment, the output video then presents motion that goes from fast to slow (normal playback → slow motion) and then from slow to fast (slow motion → normal playback). It is easy to see that the motion of background objects behind the basketball (such as the movements of the audience) synchronously exhibits the same fast-to-slow, then slow-to-fast effect.
Step 205, in response to the obtained instruction for ending video recording, ending video recording to obtain an output video generated based on the first video and the second video, where the output video includes a video frame image of the first video and a video frame image of the second video.
During playback of the output video, the motion effect of the object output by the video frame images of the first video differs from the motion effect of the object output by the video frame images of the second video.
It should be noted that if, while video frame images of the second video are being obtained based on the start point calibration information, an end-recording instruction is received before any end point calibration information, the end-recording instruction also serves as end point calibration information: based on it, the collection of video frame images of the second video is likewise terminated.
Referring to fig. 3, the step 205 can be specifically realized by the following processing procedures:
step 301, in the recording process, sequentially inputting each video frame image of the first video and each video frame image of the second video into a video encoder according to a time sequence;
and step 302, the video encoder sequentially performs compression coding processing on each received video frame image of the first video and each received video frame image of the second video according to a first frame rate to obtain an output video.
And in the video recording process, generating an output video in real time based on the obtained video frame image of the first video and the obtained video frame image of the second video.
Specifically, so that the final output video exhibits at least the two different effects of "normal play + slow motion play" during playback, when the output video is generated during recording, a video compression and encoding unit may compression-encode the video frame images of both the first video and the second video at the same first frame rate.
Specifically, the video compression and encoding unit is preset with a fixed frame rate at which encoding is performed. This frame rate is the normal frame rate for video playback, i.e., the first frame rate, typically 30 fps.
During recording, a buffer may be used to cache each video frame image of the first video and of the second video in chronological order, and these images are fed into the video encoder in that order. The video encoder compression-encodes the received video stream images (i.e., the video frame images of the first and second videos, input in chronological order) at the set fixed frame rate, namely the first frame rate, finally producing an output video that contains the video frame images of both videos.
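The buffering and fixed-rate encoding bookkeeping can be sketched as follows. This is a toy stand-in for a real video encoder: it only assigns each chronologically buffered frame a presentation timestamp at the fixed first frame rate, which is what produces the stretched time axis:

```python
from collections import deque

def encode_at_fixed_rate(frames, encode_fps=30):
    """Assign output presentation timestamps to chronologically buffered
    frames. `frames` is an iterable of frame labels in capture order."""
    buffer = deque(frames)     # chronological buffer of first+second video frames
    output = []
    t = 0.0
    while buffer:
        output.append((round(t, 4), buffer.popleft()))
        t += 1.0 / encode_fps  # every frame occupies 1/first_frame_rate seconds
    return output

# 30 normal frames (1s captured) followed by 120 slow-motion frames (1s captured)
# yield 150 output frames, i.e. 5s of playback at 30fps: 1s normal + 4s slow motion.
stream = [f"n{i}" for i in range(30)] + [f"s{i}" for i in range(120)]
encoded = encode_at_fixed_rate(stream, encode_fps=30)
assert len(encoded) == 150 and encoded[-1][0] == round(149 / 30, 4)
```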
Thus, when the video frame images of the first and second videos are compression-encoded at the set frame rate, the frame images of the first video are essentially encoded at the first video's own first frame rate: in the encoding result, each 1s of the time axis corresponds to a first-frame-rate number of video images (e.g., 30 frames). For the second video, the video compression and encoding unit changes its frame rate from the second frame rate to the first frame rate, taking and compression-encoding its frames in sequence, so that after encoding, each 1s of captured material occupies "second frame rate / first frame rate" seconds on the time axis. This is equivalent to stretching the time axis of the second video, achieving the slow motion effect when the second video is played at the first frame rate.
The final output video is thus a composite of one or more first-video segments with normal playing effect and one or more second-video segments with slow motion playing effect. After the above processing, in the final output video, the playing duration of each video segment (corresponding to segments of the first video and of the second video respectively) relates to the duration actually elapsed in the real world as illustrated in the example of fig. 4.
Based on this embodiment, output videos with different playing effects can be generated from a single video capture event: a normal playing effect for ordinary scenes, and a slow motion effect for the key time period in which a subject object should be highlighted. This emphasizes the slow motion of the key period and improves watchability, and since video images need not be captured at a high frame rate throughout, the device's bandwidth and/or storage consumption is correspondingly reduced.
Referring to the flowchart of the video recording method shown in fig. 5, in the case that the electronic device includes at least two image capturing devices, optionally, the video recording method further includes, before step 301, when generating the output video:
step 501, based on the video frame image of the first video, performing picture calibration on at least part of the video frame image of the second video.
To reduce picture jump when switching between image capturing devices, in this embodiment, before the first and second videos are compression-encoded to generate the output video, picture calibration is performed on at least part of the video frame images of the second video based on the video frame images of the first video.
Picture alignment between the video frame images of the two videos can be achieved based on, but not limited to, spatial position alignment, or real-time feature point matching with perspective transformation, and the like.
For example, pixel position mapping information between images captured by the first and second image capturing devices is determined in advance, and based on this mapping information, calibration is applied to the frame images at the head of the second video, at least with reference to the frame images at the tail of the first video, so as to avoid abrupt picture changes when switching between the video images of the two videos.
The calibration is performed in real time during recording, and the calibrated video frame images of the second video are sent synchronously to the display module and the encoding module. Thus, in this dual-camera scheme, when capture switches from the first to the second image capturing device and the display screen correspondingly switches to previewing images from the second device, the preview shown is the calibrated video image of the second video, avoiding the abrupt picture change that switching between image capturing devices would otherwise cause.
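A minimal sketch of the pixel position mapping idea, with a purely illustrative shift mapping standing in for the predetermined mapping information (a real system would derive the mapping from feature matching or a precomputed perspective transformation, e.g. a homography):

```python
def calibrate_frame(frame, mapping):
    """Remap a second-camera frame into the first camera's pixel coordinates.
    `frame` is {(x, y): value}; `mapping` converts second-camera coordinates
    to first-camera coordinates."""
    return {mapping(x, y): v for (x, y), v in frame.items()}

# Illustrative mapping: the second camera's view is offset 2px right, 1px down
# relative to the first camera, so subtract that offset to align the pictures.
shift = lambda x, y: (x - 2, y - 1)

frame2 = {(2, 1): "corner", (3, 1): "edge"}
aligned = calibrate_frame(frame2, shift)
assert aligned == {(0, 0): "corner", (1, 0): "edge"}
```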
Optionally, in an embodiment, the electronic device includes a single image capturing apparatus.
In this case, as shown in fig. 6, the video recording method disclosed in the present application may be specifically implemented as:
step 601, in response to the obtained instruction for starting video recording, acquiring a video image at a first frame rate through an image acquisition device, and storing the acquired video image as a video frame image of a first video.
Step 601 is the same as step 201, and specific reference may be made to the description of step 201, which is not described in detail.
Step 602, in the process of capturing the video image by the image capturing device, displaying the video image through the display screen.
Step 603, in the process of capturing the video image by the image capturing device, if start point calibration information is obtained, taking the moment corresponding to the start point calibration information as the start point, obtaining the video images captured by the image capturing device at the first frame rate, until end point calibration information is obtained.
Step 604, performing frame interpolation on the video images captured between the moment corresponding to the start point calibration information (as the start point) and the moment corresponding to the end point calibration information (as the end point), to obtain a second video at the second frame rate;
the electronic device only has one image acquisition device, and the acquisition frame rate cannot be changed online, so that when the initial point calibration information is obtained, the first frame rate is still maintained for video image acquisition, and different from before the initial point calibration information is obtained, the acquired video image is used as a video frame image of the second video.
Since the first frame rate is lower than a second frame rate required by the second video, in order to enable the second video to have a slow motion effect in the output video, frame interpolation processing may be performed on a video image sequence acquired in a time period in which a time corresponding to the start point calibration information is a start point and a time corresponding to the end point calibration information is an end point, so as to obtain the second video at the second frame rate.
For example, a video sequence of 120fps (i.e., the second video) is obtained by inserting 3 frames of video images between every two adjacent frames of video images of a 30fps video image sequence.
The video image required by frame interpolation can be obtained by performing pixel processing on the acquired video image by utilizing a corresponding frame interpolation algorithm.
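A minimal sketch of such frame interpolation, assuming simple linear blending of stand-in pixel values (practical interpolation algorithms, e.g. motion-compensated ones, are far more sophisticated):

```python
def interpolate(seq, factor):
    """Insert `factor - 1` linearly blended frames between each adjacent pair,
    raising a sequence captured at fps f toward fps f * factor. Frames here
    are plain numbers standing in for per-pixel values."""
    out = []
    for a, b in zip(seq, seq[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(a + (b - a) * k / factor)  # linear blend between a and b
    out.append(seq[-1])
    return out

# 30fps -> 120fps: 3 interpolated frames between every two captured frames.
assert interpolate([0.0, 4.0], 4) == [0.0, 1.0, 2.0, 3.0, 4.0]
```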
Step 605, in response to the obtained instruction for ending video recording, ending video recording to obtain an output video generated based on the first video and the second video, where the output video includes a video frame image of the first video and a video frame image of the second video.
And in the playing process of the output video, the motion effect of the video frame image output object of the first video is different from the motion effect of the video frame image output object of the second video.
Step 605 is the same as step 205, and reference may be made to the description of step 205, which is not described in detail.
It should be noted that, in this embodiment, the video frame images of the first video and the video frame images obtained after the frame interpolation of the second video, which are collected and cached in chronological order, are sequentially input to the video compression and encoding unit.
In the embodiment, for the case that the electronic device only includes a single image capturing device, by performing frame interpolation processing on the video image sequence corresponding to the second video captured at the low frame rate, a slow motion playing effect of a portion corresponding to the second video in the output video can be realized.
In addition, optionally, for a case that the electronic device includes a single image capturing apparatus, the required purpose may also be achieved through frame dropping processing, referring to fig. 7, where this embodiment specifically corresponds to the following implementation process:
and 701, responding to the obtained video recording starting instruction, and acquiring a video image at a second frame rate through the image acquisition device.
In this embodiment, the acquisition frame rate of the image acquisition device is set to the second frame rate, i.e., to the frame rate required for the slow-motion second video, such as 120 fps.
Therefore, when the instruction for starting video recording is obtained, the image acquisition device acquires video images at the second frame rate.
Step 702, performing frame dropping on the video images captured at the second frame rate, and storing the frame images remaining after frame dropping as the video frame images of the first video.
That is, the video image sequence captured at the second frame rate undergoes frame dropping to obtain a first video at the first frame rate.
Specifically, frames may be dropped relatively uniformly at intervals over the captured sequence, according to the multiple between the capture (second) frame rate and the first frame rate required by the first video. For example, if the first frame rate is 30fps and the second frame rate is 120fps, 1 frame out of every 4 may be kept as a video frame image of the first video, i.e., 3 frames out of every 4 are dropped. The timestamps of the remaining images on the time axis are preserved, and the sequence formed by the remaining images can serve as the video stream of the first video, yielding a first video at the first frame rate.
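The uniform frame dropping described here can be sketched as follows (the timestamps and rates are the example values from the text):

```python
def drop_frames(frames, capture_fps=120, target_fps=30):
    """Keep every (capture_fps // target_fps)-th frame, preserving each kept
    frame's original capture timestamp. `frames` are (timestamp, image) pairs."""
    step = capture_fps // target_fps
    return frames[::step]

captured = [(i / 120, f"f{i}") for i in range(8)]  # 8 frames captured at 120fps
kept = drop_frames(captured)                       # keep 1 of every 4 frames
assert [ts for ts, _ in kept] == [0.0, 4 / 120]
```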
Step 703, displaying the video image through the display screen in the process of capturing the video images by the image capturing device.
Step 704, in the process of capturing the video images by the image capturing device, if start point calibration information is obtained, taking the moment corresponding to the start point calibration information as the start point, capturing video images at the second frame rate until end point calibration information is obtained, so as to obtain the second video;
Since the image capturing device captures video images at the second frame rate, which is the same as the frame rate required by the second video, the video image sequence captured between the start point (the moment corresponding to the start point calibration information) and the end point (the moment corresponding to the end point calibration information) can be used directly as the second video.
Step 705, in response to the obtained instruction for ending video recording, ending video recording to obtain an output video generated based on the first video and the second video, where the output video includes a video frame image of the first video and a video frame image of the second video.
And in the playing process of the output video, the motion effect of the video frame image output object of the first video is different from the motion effect of the video frame image output object of the second video.
Step 705 is the same as step 205, and reference may be made to the description of step 205, which is not described in detail.
It should be noted that, in this embodiment, the video frame images of the first video remaining after frame dropping and the video frame images of the second video, captured and cached in chronological order, are input sequentially to the video compression and encoding unit.
In the embodiment, for the case that the electronic device only includes a single image capturing device, the video image capturing of the first video and the second video is performed according to the high frame rate, and the frame dropping processing is performed on the video image sequence corresponding to the first video captured at the high frame rate, so that the purpose of presenting different playing effects when the output video is played can be achieved.
In addition, in each embodiment of the present application, the output video acquires its different playing effects through processing at the shooting end (such as switching of image capturing devices, frame interpolation or frame dropping, and encoding, compression and composition of the video), so that no editing (such as adjusting the playback frame rate) is needed when the output video is played.
In addition, the present application also discloses an electronic device, as shown in a schematic structural diagram of the electronic device shown in fig. 8, the electronic device includes:
a display screen 801;
an image acquisition device 802;
a memory 803 for storing at least one set of instructions;
the set of computer instructions may be embodied in the form of a computer program.
The memory 803 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 804 is configured to call and execute the instruction set in the memory, and implement the video recording method disclosed in any of the above method embodiments by executing the instruction set.
The processor 804 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
Besides, the electronic device may further include an input device, a communication interface, a communication bus, and the like. The memory, the processor and the communication interface communicate with each other via a communication bus.
The communication interface is used for communication between the electronic device and other devices. The communication bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and the like.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
For convenience of description, the above system or apparatus is described as being divided into various modules or units by function, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
Finally, it is further noted that, herein, relational terms such as first, second, third, fourth, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A video recording method, comprising:
acquiring video images through an image acquisition device in response to the obtained video recording starting instruction, and storing at least part of the acquired video images as video frame images of a first video determined based on a first frame rate;
in the process of acquiring the video image through the image acquisition device, displaying the video image through a display screen;
in the process of acquiring the video image through the image acquisition device, if the starting point calibration information is obtained, obtaining a video frame image serving as a second video based on the starting point calibration information until the end point calibration information is obtained, wherein the second video is determined based on a second frame rate; wherein the first frame rate is different from the second frame rate;
ending video recording in response to the obtained instruction for ending video recording, and obtaining an output video generated based on the first video and the second video, wherein the output video comprises a video frame image of the first video and a video frame image of the second video,
and in the playing process of the output video, the motion effect of the video frame image output object of the first video is different from the motion effect of the video frame image output object of the second video.
2. The method of claim 1, wherein obtaining video frame images as a second video based on the start point calibration information until end point calibration information is obtained, comprises:
switching from a first image capture device to the second image capture device; the first image acquisition device is an image acquisition device for acquiring video images at the first frame rate;
capturing video images at a second frame rate by a second image capturing device and storing the video images as video frame images of a second video until end point calibration information is obtained.
3. The method of claim 2, further comprising:
and switching from the second image acquisition device to the first image acquisition device based on the end point calibration information until the start point calibration information is obtained again or the end video recording instruction is obtained.
4. The method of claim 1, the capturing video images by an image capture device and storing at least a portion of the captured video images as video frame images of a first video determined based on a first frame rate, comprising:
acquiring a video image at the first frame rate through an image acquisition device;
all video images collected at a first frame rate are used as video frame images of a first video and stored;
the obtaining of the video frame image as the second video based on the start point calibration information until obtaining the end point calibration information includes:
acquiring the moment corresponding to the initial point calibration information of the image acquisition device as an initial point, and taking the video image acquired at the first frame rate as a video frame image of a second video until the end point calibration information is acquired;
wherein the second video is: a video formed by performing frame interpolation on the video frames captured by the image capturing device at the first frame rate between the start point calibration information and the end point calibration information, the frame rate after frame interpolation being the second frame rate.
5. The method of claim 1, the capturing, by an image capture device, a video image and storing the video image as a video frame image of a first video determined based on a first frame rate, comprising:
acquiring a video image at the second frame rate through an image acquisition device;
performing frame loss processing on the video image acquired at the second frame rate, and taking and storing the residual frame image after frame loss as the video frame image of the first video;
the obtaining of the video frame image as the second video based on the start point calibration information until obtaining the end point calibration information includes:
and taking the moment corresponding to the start point calibration information as the start point, using the video images captured by the image capturing device at the second frame rate as video frame images of the second video, until the end point calibration information is obtained.
6. The method of claim 1, wherein the obtaining start point calibration information comprises:
detecting first target operation information;
or, detecting that a target subject object appears in the video image, or that a subject object in the current video image exhibits a target behavior.
7. The method of claim 1, wherein the obtaining end point calibration information comprises:
detecting second target operation information;
or, detecting that the target subject object disappears from the video image, or that the behavior of the subject object in the video image switches away from the target behavior.
8. The method of claim 1, wherein generating an output video based on the first video and the second video comprises:
during recording, sequentially inputting each video frame image of the first video and each video frame image of the second video into a video compression and encoding unit in chronological order;
and sequentially performing, by the video compression and encoding unit, compression encoding on each received video frame image of the first video and of the second video at the first frame rate, to obtain the output video.
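One plausible reading of claim 8's chronological ordering is that frames of the first video outside the calibrated interval and frames of the second video inside it are fed to the encoder in time order. A hypothetical sketch, where frames are (timestamp, image) pairs and the names and pair representation are illustrative, not taken from the claims:

```python
def assemble_output(first_video, second_video, start_time, end_time):
    """Order frames for the encoder: first-video frames outside the
    calibrated interval, second-video frames inside it, all in
    chronological order. Frames are (timestamp, image) pairs."""
    head = [f for f in first_video if f[0] < start_time]
    middle = [f for f in second_video if start_time <= f[0] <= end_time]
    tail = [f for f in first_video if f[0] > end_time]
    return head + middle + tail
```

Since the encoder runs at the first frame rate throughout, the denser middle segment occupies more output frames and therefore plays back in slow motion.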
9. The method of claim 8, further comprising, before sequentially inputting the video frame images of the second video to a video compression and encoding unit in chronological order:
performing picture alignment on at least a portion of the video frame images of the second video based on the video frame images of the first video.
10. An electronic device, comprising:
a display screen;
an image acquisition device;
a memory for storing at least one set of instructions;
a processor for calling and executing the set of instructions in the memory, the processor implementing the video recording method of any one of claims 1-9 by executing the set of instructions.
CN202110349654.5A 2021-03-31 2021-03-31 Video recording method and electronic equipment Active CN113067994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110349654.5A CN113067994B (en) 2021-03-31 2021-03-31 Video recording method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113067994A true CN113067994A (en) 2021-07-02
CN113067994B CN113067994B (en) 2022-08-19

Family

ID=76564961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110349654.5A Active CN113067994B (en) 2021-03-31 2021-03-31 Video recording method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113067994B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022127839A1 (en) * 2020-12-18 2022-06-23 Beijing Zitiao Network Technology Co., Ltd. Video processing method and apparatus, device, storage medium, and computer program product
CN115242992A (en) * 2021-08-12 2022-10-25 Honor Device Co., Ltd. Video processing method and device, electronic equipment and storage medium
CN115242992B (en) * 2021-08-12 2023-08-18 Honor Device Co., Ltd. Video processing method, device, electronic equipment and storage medium
CN114520874A (en) * 2022-01-28 2022-05-20 Xi'an Weiwo Software Technology Co., Ltd. Video processing method and device and electronic equipment
CN114520874B (en) * 2022-01-28 2023-11-24 Xi'an Weiwo Software Technology Co., Ltd. Video processing method and device and electronic equipment
CN114679622A (en) * 2022-03-08 2022-06-28 PowerVision Tech Inc. Video file generation method, device, equipment and medium
CN117014686A (en) * 2022-04-29 2023-11-07 Honor Device Co., Ltd. Video processing method and electronic equipment
CN116156250A (en) * 2023-02-21 2023-05-23 Vivo Mobile Communication Co., Ltd. Video processing method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245771A (en) * 2014-07-01 2016-01-13 Apple Inc. Mobile camera system
CN107454322A (en) * 2017-07-31 2017-12-08 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Photographing method and device, computer-readable storage medium, and mobile terminal
CN110086905A (en) * 2018-03-26 2019-08-02 Huawei Technologies Co., Ltd. Video recording method and electronic device
CN110636276A (en) * 2019-08-06 2019-12-31 Realme Chongqing Mobile Telecommunications Corp., Ltd. Video shooting method and device, storage medium and electronic equipment
CN110868560A (en) * 2018-08-27 2020-03-06 Qingdao Hisense Mobile Communication Technology Co., Ltd. Video recording method based on binocular camera and terminal equipment
CN110933315A (en) * 2019-12-10 2020-03-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image data processing method and related equipment
CN112422863A (en) * 2019-08-22 2021-02-26 Huawei Technologies Co., Ltd. Intelligent video recording method and device
CN112532903A (en) * 2019-09-18 2021-03-19 Huawei Technologies Co., Ltd. Intelligent video recording method, electronic equipment and computer readable storage medium
CN112532880A (en) * 2020-11-26 2021-03-19 Spreadtrum Communications (Shanghai) Co., Ltd. Video processing method and device, terminal equipment and storage medium
CN112532865A (en) * 2019-09-19 2021-03-19 Huawei Technologies Co., Ltd. Slow-motion video shooting method and electronic equipment

Also Published As

Publication number Publication date
CN113067994B (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN113067994B (en) Video recording method and electronic equipment
CN107613235B (en) Video recording method and device
CN111654629B (en) Camera switching method and device, electronic equipment and readable storage medium
CN107295284B (en) Method and device for generating, retrieving and playing video file consisting of audio and picture
US8314856B2 (en) Imaging apparatus, variable speed imaging method, and recording medium storing program thereof
CN108632676B (en) Image display method, image display device, storage medium and electronic device
US8411158B2 (en) Image sensing apparatus and storage medium
CN108184165B (en) Video playing method, electronic device and computer readable storage medium
JP5456023B2 (en) Image photographing apparatus, image photographing method, program, and integrated circuit
WO2022111198A1 (en) Video processing method and apparatus, terminal device and storage medium
CN108965705B (en) Video processing method and device, terminal equipment and storage medium
US20110064129A1 (en) Video capture and generation at variable frame rates
KR101948692B1 (en) Phtographing apparatus and method for blending images
US9703461B2 (en) Media content creation
CN104918123A (en) Method and system for playback of motion video
CN113949893A (en) Live broadcast processing method and device, electronic equipment and readable storage medium
US9813639B2 (en) Image processing device and control method for the same for applying a predetermined effect to a moving image
CN110913118B (en) Video processing method, device and storage medium
US11622099B2 (en) Information-processing apparatus, method of processing information, and program
US10468029B2 (en) Communication terminal, communication method, and computer program product
JP2020524450A (en) Transmission system for multi-channel video, control method thereof, multi-channel video reproduction method and device thereof
US20120219264A1 (en) Image processing device
JP2012010133A (en) Image processing apparatus and image processing program
CN112887515A (en) Video generation method and device
JP2011119934A (en) Image shooting device and image shooting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant