CN112532880B - Video processing method and device, terminal equipment and storage medium - Google Patents
- Publication number: CN112532880B (application No. CN202011353666.7A)
- Authority
- CN
- China
- Prior art keywords
- frame rate
- image set
- frame
- image
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The application discloses a video processing method and apparatus, a terminal device, and a storage medium. The method comprises the following steps: when it is determined, based on motion data of pixel points in multiple frames of preview images, that a moving object exists in the shot picture, recording the shot picture at a first frame rate to obtain a first image set for a first time period and a second image set for a second time period; performing temporal filtering on the first image set to obtain a third image set at a second frame rate, the second frame rate being less than the first frame rate; performing frame interpolation on the second image set to obtain a fourth image set at a third frame rate, the third frame rate being greater than the first frame rate; and encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video. By processing the image sets recorded by the terminal device in this way, the embodiments of the application can obtain a smooth and complete slow motion video.
Description
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, a terminal device, and a storage medium.
Background
Slow motion video refers to video shot at high speed, i.e., video that captures instantaneous motion too fast for the naked eye to follow, such as the motion of a bullet leaving the chamber, or the course of a football player's shot. However, because high-speed shooting equipment that supports recording slow motion video is expensive, ordinary users are generally unable to record slow motion video. How to record slow motion video with an ordinary terminal device is therefore an important research topic in video processing technology.
Disclosure of Invention
The embodiment of the application provides a video processing method and device, terminal equipment and a storage medium.
In a first aspect, the present application provides a video processing method, including:
when it is determined, based on motion data of pixel points in multiple frames of preview images, that a moving object exists in the shot picture, recording the shot picture at a first frame rate to obtain a first image set for a first time period and a second image set for a second time period, respectively, wherein the first time period precedes the second time period;
performing time domain filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is less than the first frame rate;
performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
and respectively encoding the third image set and the fourth image set at the second frame rate to obtain a slow motion video.
With reference to the first aspect, in some possible embodiments, the method further comprises:
sampling an original image recorded in the shooting picture at the first frame rate to obtain a multi-frame preview image at the second frame rate;
and calculating the motion data of each pixel point of the multi-frame preview image by adopting a motion vector method.
With reference to the first aspect, in some possible implementations, the calculating the motion data of each pixel point by using a motion vector method includes:
acquiring a pixel point of the multi-frame preview image in the shooting picture;
and carrying out vector operation on the pixel coordinates of the pixel point in the multi-frame preview image to obtain the displacement vector of the pixel point.
With reference to the first aspect, in some possible embodiments, when the displacement vectors of all pixel points are identical, no moving object exists in the shot picture; when the displacement vectors of the pixel points take multiple distinct values, a moving object exists in the shot picture.
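The decision rule above can be sketched in a few lines. This is a simplified illustration with a hypothetical helper name, not the patent's implementation: if every pixel point shares the same displacement vector, the whole scene moved together (e.g., a camera pan), whereas multiple distinct vectors indicate a moving object.

```python
def detect_moving_object(prev_points, curr_points):
    """Compare per-pixel displacement vectors between two preview frames.

    prev_points, curr_points: lists of (x, y) pixel coordinates for the
    same tracked pixel points in two consecutive preview images.
    """
    # Displacement vector of each pixel point (vector subtraction of
    # its coordinates in the two preview images).
    displacements = {
        (cx - px, cy - py)
        for (px, py), (cx, cy) in zip(prev_points, curr_points)
    }
    # A single shared vector means uniform global motion (camera pan);
    # several distinct vectors mean something moved within the scene.
    return len(displacements) > 1

prev = [(10, 10), (50, 50), (90, 20)]
pan = [(12, 10), (52, 50), (92, 20)]        # every point shifts by (2, 0)
print(detect_moving_object(prev, pan))      # False: no moving object

moved = [(12, 10), (57, 57), (92, 20)]      # middle point moves differently
print(detect_moving_object(prev, moved))    # True: moving object present
```

The set-of-vectors test directly mirrors the "same vector" versus "plurality of values" distinction made above.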
With reference to the first aspect, in some possible implementations, performing temporal filtering on the first image set to obtain a third image set at a second frame rate includes:
continuously sampling the first image set according to continuous sampling parameters to obtain a plurality of frames of images to be fused, wherein the continuous sampling parameters comprise continuous sampling frequency and continuous sampling frame number, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
and fusing the plurality of frames of images to be fused by the continuous sampling frame number to obtain a third image set at the second frame rate.
With reference to the first aspect, in some possible implementations, the performing frame interpolation on the second image set to obtain a fourth image set at a third frame rate includes:
predicting intermediate frame images between images of every two adjacent frames in the second image set, the number of the intermediate frame images being determined based on the first frame rate and the third frame rate;
and for an intermediate frame image between every two adjacent frames of images, inserting the intermediate frame image between the two adjacent frames of images to obtain a fourth image set of the third frame rate.
With reference to the first aspect, in some possible embodiments, the method further comprises:
configuring recording parameters, wherein the recording parameters comprise the first frame rate, the second frame rate and the third frame rate.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the recording unit is used for recording the shot picture at a first frame rate when it is determined, based on motion data of pixel points in multiple frames of preview images, that a moving object exists in the shot picture, to obtain a first image set for a first time period and a second image set for a second time period, respectively, wherein the first time period precedes the second time period;
the time domain filtering unit is used for performing time domain filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is less than the first frame rate;
the frame interpolation processing unit is used for performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
and the encoding unit is used for encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video.
With reference to the second aspect, in some possible embodiments, the apparatus further includes a calculating unit, configured to sample an original image recorded at the first frame rate in the captured picture to obtain multiple frames of preview images at the second frame rate; and calculating the motion data of each pixel point of the multi-frame preview image by adopting a motion vector method.
With reference to the second aspect, in some possible embodiments, the calculating unit calculates the motion data of each pixel point by using a motion vector method, including:
acquiring a pixel point of the multi-frame preview image in the shooting picture;
and carrying out vector operation on the pixel coordinates of the pixel point in the multi-frame preview image to obtain the displacement vector of the pixel point.
With reference to the second aspect, in some possible embodiments, when the displacement vectors of all pixel points are identical, no moving object exists in the shot picture; when the displacement vectors of the pixel points take multiple distinct values, a moving object exists in the shot picture.
With reference to the second aspect, in some possible embodiments, the temporal filtering unit is configured to perform temporal filtering on the first image set to obtain a third image set at a second frame rate, and includes:
continuously sampling the first image set according to continuous sampling parameters to obtain a plurality of frames of images to be fused, wherein the continuous sampling parameters comprise continuous sampling frequency and continuous sampling frame number, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
and fusing the plurality of frames of images to be fused by the continuous sampling frame number to obtain a third image set at the second frame rate.
With reference to the second aspect, in some possible embodiments, the frame interpolation processing unit is configured to perform frame interpolation processing on the second image set to obtain a fourth image set at a third frame rate, and includes:
predicting intermediate frame images between images of every two adjacent frames in the second image set, the number of the intermediate frame images being determined based on the first frame rate and the third frame rate;
and for an intermediate frame image between every two adjacent frames of images, inserting the intermediate frame image between the two adjacent frames of images to obtain a fourth image set of the third frame rate.
With reference to the second aspect, in some possible embodiments, the apparatus further includes a configuration unit, configured to configure recording parameters, where the recording parameters include the first frame rate, the second frame rate, and the third frame rate.
In a third aspect, an embodiment of the present application provides a terminal device, including a processor and a memory connected to each other, wherein the memory is configured to store program code and the processor is configured to call the program code to execute the video processing method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video processing method of the first aspect.
When the terminal device processes an image set acquired at a low frame rate, it can obtain a high-frame-rate image set by predicting an intermediate frame image between every two adjacent frames and inserting it between them; encoding and outputting the high-frame-rate image set yields slowed-down video. In addition, before the frame interpolation, the terminal device also performs temporal filtering on an image set and encodes the filtered image set to obtain video at a constant rate. The final slow motion video therefore contains both constant-rate video and slowed-down video, producing a variable-rate effect, and the weighted fusion used in temporal filtering ensures the fluency of the resulting video. Through the scheme of the embodiments of the application, a complete and smooth slow motion video can thus be obtained on an ordinary terminal device, so that users can record the slow motion videos they need more freely, improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of time-domain filtering provided by an embodiment of the present application;
fig. 4 is a schematic diagram of an interpolation process provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of another video processing method provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another terminal device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Generally, the image capture devices (such as image sensors) in portable terminal devices on the market record video at frame rates of 30 frames per second (fps), 60 fps, 120 fps, or 240 fps. Slow motion video, however, often requires a high-frame-rate image acquisition device, so an ordinary terminal device cannot record it. To obtain slow motion video at high frame rates such as 480 fps or 960 fps, a terminal device needs to be configured with a dedicated image sensor, such as a fast readout sensor (FRS). However, the cost of a terminal device equipped with such a dedicated image sensor is high, and an ordinary user cannot record slow motion video with an ordinary terminal device. To solve this problem, embodiments of the present application provide a video processing system and scheme that can produce slow motion video on an ordinary terminal device.
Referring to fig. 1, fig. 1 shows a system architecture diagram of a terminal device. The terminal device may include, but is not limited to, a smart phone, a laptop computer, a tablet computer, a desktop computer, and the like. The terminal device may include: an image acquisition module 101, a hardware driver module 102, an image processing module 103, and an encoding module 104. The image acquisition module 101 is configured to record images of the shot picture and may be an image sensor. The hardware driver module 102 may be an image signal processing driver (ISP driver) and is configured to control the recording of the image acquisition module 101; for example, it controls the image acquisition module 101 to record an image set for a time period at a first frame rate. The image processing module 103 may be a hardware abstraction layer (HAL layer) and is configured to process the recorded image sets. The encoding module 104 may be an encoder supporting the VSP protocol and may be used to encode the image sets processed by the image processing module 103. In a specific implementation, the video processing scheme provided in the embodiment of the present application proceeds as follows: when the image processing module 103 determines, based on the motion data of the pixel points of the multiple frames of preview images, that a moving object exists in the shot picture, it configures recording-related parameters (e.g., the first frame rate, the second frame rate, the third frame rate, etc.) for the hardware driver module 102; the hardware driver module 102 controls the image acquisition module 101 to record the shot picture at the first frame rate, obtaining a first image set for a first time period and a second image set for a second time period; the image processing module 103 then performs temporal filtering on the first image set to obtain a third image set at the second frame rate, and performs frame interpolation on the second image set to obtain a fourth image set at the third frame rate; finally, the encoding module 104 encodes the third image set and the fourth image set respectively to obtain the slow motion video.
Therefore, in the video processing scheme provided by the embodiment of the application, the acquired low frame rate image set is converted into the fourth image set with the high frame rate by using frame interpolation processing, and before the frame interpolation processing, time-domain filtering processing is performed, so that the slow motion video has the effect of rate change, and the time-domain filtering can ensure the fluency of the rate change in the slow motion video. Therefore, the scheme provided by the embodiment of the application can record complete and smooth slow motion video in the terminal equipment.
Based on the above description of the video processing scheme, the embodiment of the present application provides a video processing method; the video processing method may be performed by the above-mentioned terminal device. Referring to fig. 2, the video processing method may include the following steps S201 to S204:
S201, when it is determined, based on motion data of pixel points in multiple frames of preview images, that a moving object exists in the shot picture, recording the shot picture at a first frame rate to obtain a first image set for a first time period and a second image set for a second time period, respectively, wherein the first time period precedes the second time period.
In one possible embodiment, after the terminal device detects a trigger event, it records the shot picture at the first frame rate to obtain the first image set for the first time period and the second image set for the second time period, respectively.
The motion detection function of the terminal device has two states: motion detection on and motion detection off. In one possible embodiment, when motion detection is off, the trigger event may be the terminal device detecting that the user has tapped the record button. In another possible implementation, when motion detection is on, the terminal device obtains motion data of the pixel points in the multiple frames of preview images, and the trigger event may be the terminal device determining, based on that motion data, that a moving object exists in the shot picture.
In a feasible implementation manner, after the terminal device detects the trigger event, the terminal device obtains the recording parameters, and then records the shot pictures according to a first frame rate included in the recording parameters, so as to obtain a first image set of a first time period and a second image set of a second time period respectively.
The recording parameters are configured in advance according to service requirements and experience, and the recording parameters may include the first frame rate, the second frame rate, and the third frame rate mentioned herein. In addition, the recording parameters may also include other parameters required for recording, such as the size of the shot image, the exposure time during shooting, and the like, which is not limited herein.
Wherein the first time period precedes the second time period. Optionally, the durations of the first and second time periods may be set according to business requirements or experience; for example, the first time period may be set to 1 second and the second time period to 0.25 second. Furthermore, the duration of the second time period cannot exceed a duration threshold, which is determined by the hardware of the terminal device, such as the processing rate of the processor and the size of the memory.
Optionally, the first frame rate may simply be the maximum recording frame rate of the terminal device, which is determined by its hardware (e.g., the image sensor). For example, if the maximum recording frame rate supported by the terminal device is 240 fps, the first frame rate is also 240 fps. Optionally, the first frame rate may instead be set according to experience and service requirements, as long as it does not exceed the maximum recording frame rate; for example, if the maximum supported recording frame rate is 240 fps, the first frame rate may be 240 fps, 120 fps, and so on.
S202, performing time domain filtering on the first image set to obtain a third image set at a second frame rate; the second frame rate is less than the first frame rate.
In one possible embodiment, the second frame rate is included in the recording parameters. The terminal device calculates the ratio of the first frame rate to the second frame rate and determines the continuous sampling frequency from that ratio. The terminal device then acquires the number of continuous sampling frames and continuously samples the first image set based on the continuous sampling frequency and the number of continuous sampling frames, obtaining multiple frames of images to be fused. The images to be fused are fused in groups of the continuous-sampling-frame number to obtain the third image set at the second frame rate. The number of continuous sampling frames can be set according to experience and service requirements.
In one possible embodiment, the second frame rate may be equal to the encoding rate of the terminal device. Typically, the encoding rate of the terminal device is 30 fps, in which case the second frame rate may also be 30 fps. It should be understood that the second frame rate may take other values as new service scenarios emerge, and is not limited herein.
In a possible embodiment, fusing the images to be fused may refer to performing weighted fusion according to a fusion parameter, where the fusion parameter may be set according to business requirements and experience.
To better illustrate the present solution, the embodiments of the present application are described below with a specific example. Assume the first frame rate is 240 fps and the second frame rate is 30 fps; the terminal device determines the continuous sampling frequency to be 8 from the ratio of the first frame rate to the second frame rate. As shown in fig. 3, assuming the first time period is 1 second, the terminal device obtains a first image set containing 240 images at the first frame rate during the first time period. With the number of continuous sampling frames set to 3, the terminal device obtains 90 frames to be fused by continuously sampling 3 frames out of every 8 frames of the 240 frames. Every 3 of those 90 frames are then fused in sequence into 1 frame, finally yielding a third image set containing 30 frames. That is, temporal filtering of the 240 images within 1 second produces a set of 30 images for that second: the first image set acquired at the first frame rate of 240 fps is temporally filtered into a third image set at the second frame rate of 30 fps.
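The 240 fps to 30 fps filtering in this example can be sketched as follows. This is an illustrative sketch only; the fusion weights are assumed values, since the patent leaves the fusion parameter to be configured from business requirements and experience.

```python
def temporal_filter(frames, first_rate=240, second_rate=30,
                    burst=3, weights=(0.5, 0.3, 0.2)):
    """Continuously sample `burst` frames out of every
    (first_rate // second_rate) frames, then fuse each burst into one
    output frame by weighted averaging."""
    period = first_rate // second_rate  # continuous sampling frequency: 8
    fused = []
    for start in range(0, len(frames), period):
        group = frames[start:start + burst]
        if len(group) < burst:
            break
        # Weighted fusion of the 3-frame burst into a single frame
        # (here frames are scalars standing in for image arrays).
        fused.append(sum(w * f for w, f in zip(weights, group)))
    return fused

# 240 frames recorded in 1 second collapse to 30 frames at 30 fps.
one_second = [float(i) for i in range(240)]
print(len(temporal_filter(one_second)))  # 30
```

With scalar "frames" the weighted sum stands in for per-pixel weighted blending of real images; the frame counts (3 sampled out of every 8, 90 fused down to 30) match the example above.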
S203, performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate.
In one possible embodiment, the third frame rate is included in the recording parameters. The terminal device may determine the number of frames of the intermediate frame image from a ratio of the third frame rate to the first frame rate, and predict an intermediate frame image between images of every two adjacent frames in the second image set based on the number of frames of the intermediate frame image. Then, for an intermediate frame image between every two adjacent frames of images, the intermediate frame image is inserted between the two adjacent frames of images, and a fourth image set at a third frame rate is obtained.
Alternatively, the terminal device may predict the intermediate frame images between every two adjacent frames in the second image set by calling an algorithm, for example an algorithm that predicts the motion trajectory of an object in an image, such as a motion vector algorithm.
In one possible embodiment, if too many intermediate frames are inserted between two adjacent frames, the change between adjacent images in the slow motion video becomes so small that the user can hardly perceive it, making the slow motion video appear to stall rather than play smoothly. To ensure the fluency of the slow motion video, the terminal device may determine the third frame rate from the first frame rate and a preset threshold, such that the ratio of the third frame rate to the first frame rate is less than or equal to the preset threshold. Following the example in step S202 above, with the first frame rate of 240 fps and a preset threshold of 4, the third frame rate may be determined to be 480 fps, 720 fps, 960 fps, and so on.
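The constraint above is simple arithmetic and can be checked directly. A sketch; restricting candidates to integer multiples of the first frame rate is an assumption here, made because the values the text lists (480, 720, 960 fps) are all such multiples.

```python
def candidate_third_rates(first_rate=240, threshold=4):
    """Third frame rates that actually slow the video down (multiple
    of at least 2) while keeping the ratio to the first frame rate
    within the preset threshold."""
    return [first_rate * k for k in range(2, threshold + 1)]

print(candidate_third_rates())  # [480, 720, 960]
```

Every candidate satisfies the fluency condition: its ratio to 240 fps is at most the preset threshold of 4.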
Referring to fig. 4, when the first frame rate is 240 fps and the third frame rate is 960 fps, the terminal device calculates the ratio of the third frame rate to the first frame rate as 4 and subtracts 1 from that ratio, obtaining 3 as the number of intermediate frame images. That is, for every two adjacent frames, the terminal device may call an algorithm to predict 3 intermediate frame images between them. As shown in fig. 4, the Nth frame image and the N'th frame image are two exemplary adjacent frames in the second image set. The terminal device calls an algorithm to predict the 3 intermediate frame images between these two adjacent frames, i.e., 3 new images are generated by predicting the motion of the main object between the Nth frame image and the N'th frame image, such as the 1st, 2nd and 3rd intermediate frame images in fig. 4. In the example shown in fig. 4, the main object is a vehicle, whose positions in the intermediate frame images are predicted from the change of its position between the Nth frame image and the N'th frame image.
Assuming the second time period is 0.25 second, the terminal device obtains a second image set containing 60 images at the first frame rate during the second time period. For every two adjacent frames in the second image set, 3 intermediate frame images are obtained and inserted between them, and the terminal device thereby obtains a fourth image set containing 240 frames. That is, frame interpolation of the 60 frames within 0.25 second yields 240 frames for that 0.25 second: the second image set acquired at the first frame rate of 240 fps is interpolated into the fourth image set at the third frame rate of 960 fps.
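A minimal sketch of this interpolation step follows. The patent predicts intermediate frames from the motion of the main object (e.g., with a motion vector algorithm); plain linear interpolation between scalar frame values stands in for that prediction here. Also note a boundary detail: this sketch only fills the 59 gaps between the 60 input frames, yielding 237 frames rather than the idealized 240 of the example above.

```python
def interpolate_frames(frames, first_rate=240, third_rate=960):
    """Insert (third_rate // first_rate - 1) predicted intermediate
    frames between every two adjacent frames."""
    n_inter = third_rate // first_rate - 1  # 3 intermediate frames
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, n_inter + 1):
            # Stand-in "prediction": linear blend of the two neighbours,
            # in place of motion-compensated intermediate-frame synthesis.
            t = k / (n_inter + 1)
            out.append(a + t * (b - a))
    out.append(frames[-1])
    return out

# Two adjacent frames gain 3 evenly spaced intermediate frames.
print(interpolate_frames([0.0, 10.0]))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```

With real images, each scalar blend would be replaced by a predicted picture of the main object at the corresponding fraction of its motion between the two frames.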
S204, encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video.
In a possible implementation, the terminal device obtains the encoding protocol, encodes the third image set at the second frame rate according to that protocol, and then encodes the fourth image set at the second frame rate to obtain the final slow motion video. The video encoded from the third image set is positioned before the video encoded from the fourth image set in the slow motion video.
As described above, the terminal device temporally filters the first image set (1 second of recording) into a third image set containing 30 frames, and interpolates the second image set (0.25 second of recording) into a fourth image set containing 240 frames. Encoding the third image set and the fourth image set respectively at the second frame rate (i.e., 30 fps) yields 9 seconds of slow motion video. In the 1st second of the slow motion video, the motion rate of the main object equals its actual motion rate; from the 2nd to the 9th second, the motion rate of the main object in the video is lower than its actual motion rate.
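The 9-second figure follows directly from encoding both image sets at 30 fps; a quick check:

```python
def playback_seconds(frame_counts, encode_rate=30):
    """Total playback time when each image set is encoded at the
    second frame rate."""
    return sum(n / encode_rate for n in frame_counts)

# 30 filtered frames (1 s of real time) play back for 1 s at normal
# speed; 240 interpolated frames (0.25 s of real time) play back for
# 8 s, i.e., a 32x slow-down segment (960 fps capture-equivalent / 30 fps).
print(playback_seconds([30, 240]))  # 9.0
```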
As can be seen from the description of the method embodiment shown in fig. 2, the trigger event in fig. 2 may be determining, based on the motion data of the pixel points of multiple frames of preview images, that a moving object exists in the shooting picture. In order to effectively distinguish a change of the shooting visual angle from a moving object in the shooting picture, a motion vector method may be adopted to calculate the motion data of the pixel points of the multi-frame preview images so as to determine whether a moving object exists in the picture. Based on this, the embodiment of the present application further provides another video processing method; referring to fig. 5, the video processing method may include the following steps S501 to S507:
S501, sampling an original image recorded in a shooting picture at a first frame rate to obtain a multi-frame preview image at a second frame rate.
In a possible implementation manner, after the terminal device enters the preview mode, it may sample the original images recorded in the shooting picture at the first frame rate to obtain multiple frames of preview images at the second frame rate. Sampling here means selecting, at a certain sampling frequency, the preview images on which motion detection is to be performed from the original images. The sampling frequency is determined according to the ratio of the first frame rate to the second frame rate. For example, when the first frame rate is 240 fps and the second frame rate is 30 fps, the sampling frequency is determined to be 8 according to the ratio of the two frame rates, which indicates that one frame is selected as a preview image out of every 8 frames of the original images. The 240 frames of images acquired by the terminal device within 1 second are thus sampled down to 30 frames; that is, the original images recorded at the first frame rate are converted by sampling into multiple frames of preview images at the second frame rate.
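The sampling described above amounts to keeping every eighth frame. A minimal numpy sketch follows; the function name `sample_preview` and the dummy frames are illustrative assumptions, not part of the patent:

```python
import numpy as np

def sample_preview(original_frames, first_rate=240, second_rate=30):
    """Keep every (first_rate // second_rate)-th frame as a preview image."""
    step = first_rate // second_rate  # 240 / 30 = 8
    return original_frames[::step]

# 240 dummy frames captured within 1 second; pixel value encodes the frame index
frames = [np.full((4, 4), i, dtype=np.uint8) for i in range(240)]
previews = sample_preview(frames)
print(len(previews))  # → 30
```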
In one possible implementation, the multi-frame preview image at the second frame rate can be further previewed and output through a display screen of the terminal device.
S502, calculating the motion data of each pixel point of the multi-frame preview image by adopting a motion vector method.
In a possible implementation manner, the terminal device may obtain a pixel point of the multi-frame preview images in the shooting picture, and perform a vector operation on the pixel coordinates of that pixel point in the multi-frame preview images to obtain the displacement vector of the pixel point.
The multi-frame preview images may be every two adjacent preview images, or may be n consecutive preview images. The terminal device can locate the same pixel point across the multiple frames of preview images based on a block matching algorithm. For example, suppose the multi-frame preview images are 3 consecutive preview images, namely the (n-2)th, (n-1)th and nth frame preview images. A pixel point A is selected in the nth frame preview image; according to the block matching algorithm, the pixel point A1 with the highest matching degree with A is found in the (n-1)th frame preview image, and the pixel point A2 with the highest matching degree with A is found in the (n-2)th frame preview image. The pixel coordinates of A, A1 and A2 are then obtained, and a vector operation is performed on the three coordinates to obtain the displacement vector of pixel point A.
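The block matching step can be sketched as a minimal sum-of-absolute-differences (SAD) search between two frames; the function name `match_block`, the tiny frame sizes and the search radius are illustrative assumptions, not the patent's concrete algorithm:

```python
import numpy as np

def match_block(ref_frame, block, center, search=2):
    """Find the position in ref_frame whose patch best matches `block`
    (minimum sum of absolute differences over a small search window)."""
    h, w = block.shape
    cy, cx = center
    best, best_pos = None, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            sad = np.abs(ref_frame[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

# Frame n has a bright 2x2 patch at (4, 4); in frame n-1 it sat at (4, 2)
frame_n  = np.zeros((8, 8), dtype=np.uint8); frame_n[4:6, 4:6] = 255
frame_n1 = np.zeros((8, 8), dtype=np.uint8); frame_n1[4:6, 2:4] = 255

block = frame_n[4:6, 4:6]
pos = match_block(frame_n1, block, (4, 4))
disp = (4 - pos[0], 4 - pos[1])  # displacement vector between the two frames
print(disp)  # → (0, 2): the patch moved 2 pixels horizontally
```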
S503, judging whether a moving object exists in the shooting picture according to the motion data of each pixel point of the multi-frame preview image.
If the shooting visual angle changes (for example, the shooting visual angle shifts because the user's hand shakes), the motion data of all pixel points in the shooting picture are the same. Therefore, in a possible implementation manner, when the displacement vectors of the respective pixel points are all the same, the terminal device may determine that no moving object exists in the shooting picture. When the displacement vectors of the pixel points take a plurality of different values, the displacement vectors of the static objects and of the moving object in the shooting picture differ, and the terminal device may determine that a moving object exists in the shooting picture.
In a possible implementation manner, if the terminal device determines that a moving object exists in the shooting picture, it is triggered to execute step S504. If the terminal device determines that no moving object exists, it repeatedly executes steps S502 and S503. Continuing the example of step S502, if the displacement vectors of the pixel points in the nth frame preview image are all the same, the terminal device may determine the displacement vectors of the pixel points in the (n+1)th frame preview image according to the (n-1)th, nth and (n+1)th frame preview images, and judge whether a moving object exists in the shooting picture according to the displacement vectors of the pixel points in the (n+1)th frame preview image.
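The decision rule above reduces to checking whether the pixel displacement vectors take more than one distinct value. A minimal sketch, with the helper name `has_moving_object` and the example vectors chosen for illustration:

```python
def has_moving_object(displacement_vectors):
    """All pixel points sharing one displacement vector suggests a global
    shift of the view (e.g. hand shake); multiple distinct values suggest a
    moving object against a differently-moving background."""
    return len(set(displacement_vectors)) > 1

# Whole view shifted by (0, 2): only a view-angle change, no moving object
global_shift = [(0, 2)] * 100
# Static background plus a small region moving by (3, 1): moving object
with_object = [(0, 0)] * 90 + [(3, 1)] * 10

print(has_moving_object(global_shift), has_moving_object(with_object))  # → False True
```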
S504, recording the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time period preceding the second time period.
S505, performing time domain filtering on the first image set to obtain a third image set at a second frame rate; the second frame rate is less than the first frame rate.
S506, performing frame interpolation processing on the second image set to obtain a fourth image set at a third frame rate; the third frame rate is greater than the first frame rate.
S507, encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video.
When the terminal device processes an image set acquired at a lower frame rate, it can predict an intermediate frame image between every two adjacent frames and insert that intermediate frame between them, thereby obtaining an image set at a higher frame rate. Encoding and outputting this higher-frame-rate image set yields slow motion video played at a reduced rate. Before the frame interpolation, the terminal device also performs temporal filtering on another image set and encodes the filtered image set to obtain video played at a constant rate. The final slow motion video therefore includes both constant-rate video and slower-rate video, producing a varying-rate effect, and the weighted fusion used in temporal filtering ensures the fluency of the filtered video. Through the scheme of the embodiment of the present application, a complete and smooth slow motion video can thus be obtained on an ordinary terminal device. In addition, the terminal device calculates the motion data of the pixel points of the multi-frame preview images with a motion vector algorithm to determine whether a moving object exists in the picture, which effectively distinguishes a change of the shooting visual angle caused by hand shake from an object actually moving in the shooting picture, and improves the user's recording experience.
A video processing apparatus provided in an embodiment of the present application is described in detail below, and fig. 6 is a schematic structural diagram of the video processing apparatus provided in the embodiment of the present application. As shown in fig. 6, the apparatus described in this embodiment may include a recording unit 601, a time domain filtering unit 602, an interpolation processing unit 603, and an encoding unit 604.
The recording unit 601 is configured to, when it is determined based on motion data of pixel points of multiple frames of preview images that a moving object exists in the shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, where the first time period is a time period preceding the second time period;
a time domain filtering unit 602, configured to perform time domain filtering on the first image set to obtain a third image set at a second frame rate; the second frame rate is less than the first frame rate;
an interpolation processing unit 603, configured to perform interpolation processing on the second image set to obtain a fourth image set at a third frame rate; the third frame rate is greater than the first frame rate;
an encoding unit 604, configured to encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video.
In some possible embodiments, the apparatus further includes a calculating unit 605, where the calculating unit 605 is configured to sample an original image recorded at the first frame rate in the captured picture to obtain a plurality of frames of preview images at the second frame rate;
and calculating the motion data of each pixel point of the multi-frame preview image by adopting a motion vector method.
In some possible embodiments, the calculating unit 605 calculates the motion data of each pixel point by using a motion vector method, including:
acquiring a pixel point of the multi-frame preview image in the shooting picture;
and carrying out vector operation on the pixel coordinates of the pixel point in the multi-frame preview image to obtain the displacement vector of the pixel point.
In some possible embodiments, when the displacement vectors of the respective pixel points are the same, no moving object exists in the captured picture;
and when the displacement vector of each pixel point has a plurality of values, a moving object exists in the shot picture.
In some possible embodiments, the temporal filtering unit 602 is configured to perform temporal filtering on the first image set to obtain a third image set at a second frame rate, and includes:
continuously sampling the first image set according to continuous sampling parameters to obtain a plurality of frames of images to be fused, wherein the continuous sampling parameters comprise continuous sampling frequency and continuous sampling frame number, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
and fusing the plurality of frames of images to be fused by the continuous sampling frame number to obtain a third image set at the second frame rate.
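The continuous sampling and fusion described above can be sketched as follows; equal-weight averaging stands in for the weighted fusion (the actual fusion weights are not specified here), and the function name `temporal_filter` is an illustrative assumption:

```python
import numpy as np

def temporal_filter(frames, first_rate=240, second_rate=30):
    """Fuse each run of first_rate // second_rate consecutive frames into one
    output frame; plain averaging stands in for the weighted fusion."""
    group = first_rate // second_rate  # consecutive sampling frame number, 8
    fused = []
    for i in range(0, len(frames) - group + 1, group):
        stack = np.stack(frames[i:i + group]).astype(np.float64)
        fused.append(stack.mean(axis=0).astype(np.uint8))
    return fused

# 240 dummy frames (1 second at the first frame rate) fused down to 30 frames
frames = [np.full((2, 2), i, dtype=np.uint8) for i in range(240)]
third_set = temporal_filter(frames)
print(len(third_set))  # → 30
```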
In some possible embodiments, the frame interpolation processing unit 603 is configured to perform frame interpolation processing on the second image set to obtain a fourth image set at a third frame rate, and includes:
predicting intermediate frame images between images of every two adjacent frames in the second image set, the number of the intermediate frame images being determined based on the first frame rate and the third frame rate;
and for an intermediate frame image between every two adjacent frames of images, inserting the intermediate frame image between the two adjacent frames of images to obtain a fourth image set of the third frame rate.
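The frame interpolation step can be sketched with linear blending standing in for the predicted intermediate frames; real implementations would use motion-compensated prediction. The function name `interpolate` and the 960 fps third frame rate (which yields 3 intermediate frames per adjacent pair, per the ratio of the third and first frame rates) are assumptions for illustration:

```python
import numpy as np

def interpolate(frames, first_rate=240, third_rate=960):
    """Insert (third_rate // first_rate - 1) blended intermediate frames
    between every two adjacent frames; linear blending stands in for
    motion-compensated intermediate-frame prediction."""
    n_mid = third_rate // first_rate - 1  # e.g. 3 intermediate frames per gap
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, n_mid + 1):
            t = k / (n_mid + 1)
            out.append(((1 - t) * a + t * b).astype(np.uint8))
    out.append(frames[-1])
    return out

# 3 dummy frames become 3 + 2 gaps * 3 intermediates = 9 frames
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 80, 160)]
dense = interpolate(frames)
print(len(dense))  # → 9
```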
In some possible embodiments, the apparatus further includes a configuration unit 606 configured to configure recording parameters, where the recording parameters include the first frame rate, the second frame rate, and the third frame rate.
In this embodiment of the present application, when it is determined based on motion data of pixel points of multiple frames of preview images that a moving object exists in the shooting picture, the recording unit 601 records the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, where the first time period is a time period preceding the second time period; then, the time domain filtering unit 602 performs temporal filtering on the first image set to obtain a third image set at a second frame rate, the second frame rate being less than the first frame rate; next, the frame interpolation processing unit 603 performs frame interpolation processing on the second image set to obtain a fourth image set at a third frame rate, the third frame rate being greater than the first frame rate; finally, the encoding unit 604 encodes the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video. When an image set acquired at a lower frame rate is processed, an intermediate frame image between every two adjacent frames can be predicted and inserted between them to obtain an image set at a higher frame rate, which is encoded and output as slow motion video played at a reduced rate. Before the frame interpolation, temporal filtering is additionally performed on another image set, which is encoded to obtain video played at a constant rate. The final slow motion video therefore includes both constant-rate video and slower-rate video, producing a varying-rate effect, and the weighted fusion used in temporal filtering ensures the fluency of the filtered video.
Therefore, through the scheme of the embodiment of the application, a complete and smooth slow motion video can be obtained through common terminal equipment.
It is to be understood that the functions of each unit of this embodiment may be specifically implemented according to the method in fig. 2 or fig. 5 in the foregoing embodiment, and the specific implementation process may refer to the description related to the method embodiment in fig. 2 or fig. 5, which is not described herein again.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device in the present embodiment shown in fig. 7 may include: a processor 701 and a memory 702. The processor 701 and the memory 702 are connected by a bus 703. The memory 702 is used to store a computer program comprising program instructions, and the processor 701 is used to execute the program instructions stored by the memory 702.
In the embodiment of the present application, the processor 701 executes the executable program code in the memory 702 to perform the following operations:
when it is determined, based on motion data of pixel points of multiple frames of preview images, that a moving object exists in a shooting picture, recording the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time period preceding the second time period;
performing time domain filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is less than the first frame rate;
performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
and respectively encoding the third image set and the fourth image set at the second frame rate to obtain a slow motion video.
In some possible implementations, the processor 701 is further configured to:
sampling an original image recorded in the shooting picture at the first frame rate to obtain a multi-frame preview image at the second frame rate;
and calculating the motion data of each pixel point of the multi-frame preview image by adopting a motion vector method.
In some possible embodiments, the processor 701 calculates the motion data of each pixel point by using a motion vector method, including:
acquiring a pixel point of the multi-frame preview image in the shooting picture;
and carrying out vector operation on the pixel coordinates of the pixel point in the multi-frame preview image to obtain the displacement vector of the pixel point.
In some possible embodiments, when the displacement vectors of the respective pixel points are the same, no moving object exists in the captured picture; and when the displacement vector of each pixel point has a plurality of values, a moving object exists in the shot picture.
In some possible embodiments, the processor 701 performs temporal filtering on the first image set to obtain a third image set at a second frame rate, including:
continuously sampling the first image set according to continuous sampling parameters to obtain a plurality of frames of images to be fused, wherein the continuous sampling parameters comprise continuous sampling frequency and continuous sampling frame number, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
and fusing the plurality of frames of images to be fused by the continuous sampling frame number to obtain a third image set at the second frame rate.
In some possible embodiments, the processor 701 performs frame interpolation on the second image set to obtain a fourth image set at a third frame rate, including:
predicting intermediate frame images between images of every two adjacent frames in the second image set, the number of the intermediate frame images being determined based on the first frame rate and the third frame rate;
and for an intermediate frame image between every two adjacent frames of images, inserting the intermediate frame image between the two adjacent frames of images to obtain a fourth image set of the third frame rate.
In some possible implementations, the processor 701 is further configured to:
configuring recording parameters, wherein the recording parameters comprise the first frame rate, the second frame rate and the third frame rate.
It should be understood that, in the embodiment of the present application, the processor 701 may be a Central Processing Unit (CPU); the processor 701 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 702 may include both read-only memory and random access memory, and provides instructions and data to the processor 701. A portion of the memory 702 may also include a non-volatile random access memory, which may store a first frame rate, a second frame rate, a third frame rate, and so on.
In a specific implementation, the processor 701 and the memory 702 described in this embodiment of the present application may execute the implementation described in the flow of the video processing method provided in fig. 2 or fig. 5 in this embodiment of the present application, and may also execute the implementation described in the video processing apparatus provided in fig. 6 in this embodiment of the present application, which is not described herein again.
In the embodiment of the application, when an image set with a low frame rate acquired by a terminal device is processed, an intermediate frame image between images of two adjacent frames can be predicted, and the intermediate frame image is inserted between the two adjacent frame images to obtain an image set with a high frame rate. And encoding and outputting the image set with the high frame rate to obtain the slow motion video with the slow rate. In addition, before the frame interpolation processing, the terminal device additionally performs temporal filtering processing on an image set, and encodes the image set after the temporal filtering processing to obtain a video with a constant rate. Thus, the final slow motion video includes both constant rate video and slower rate video, with the effect of varying rates. Moreover, the fluency of the video obtained by time-domain filtering can be ensured by weighted fusion. Therefore, through the scheme of the embodiment of the application, a complete and smooth slow motion video can be obtained through common terminal equipment.
The embodiment of the application also provides a computer readable storage medium. The computer readable storage medium stores a computer program comprising program instructions that, when executed by a processor, perform the steps performed in fig. 2 or fig. 5 of the above-described embodiments of the video processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Claims (10)
1. A video processing method, comprising:
when it is determined, based on motion data of pixel points of multiple frames of preview images, that a moving object exists in a shooting picture, recording the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time period preceding the second time period;
performing time domain filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is less than the first frame rate;
performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
and respectively encoding the third image set and the fourth image set at the second frame rate to obtain a slow motion video.
2. The method of claim 1, wherein the method further comprises:
sampling an original image recorded in the shooting picture at the first frame rate to obtain a multi-frame preview image at the second frame rate;
and calculating the motion data of each pixel point of the multi-frame preview image by adopting a motion vector method.
3. The method as claimed in claim 2, wherein said calculating the motion data of said each pixel point by using a motion vector method comprises:
acquiring a pixel point of the multi-frame preview image in the shooting picture;
and carrying out vector operation on the pixel coordinates of the pixel point in the multi-frame preview image to obtain the displacement vector of the pixel point.
4. The method according to claim 3, wherein when the displacement vectors of the respective pixel points are the same, no moving object exists in the shooting picture;
and when the displacement vector of each pixel point has a plurality of values, a moving object exists in the shot picture.
5. The method of claim 1, wherein temporally filtering the first image set to obtain a third image set at a second frame rate comprises:
continuously sampling the first image set according to continuous sampling parameters to obtain a plurality of frames of images to be fused, wherein the continuous sampling parameters comprise continuous sampling frequency and continuous sampling frame number, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
and fusing the plurality of frames of images to be fused by the continuous sampling frame number to obtain a third image set at the second frame rate.
6. The method of claim 1, wherein said interpolating said second image set to obtain a fourth image set at a third frame rate comprises:
predicting intermediate frame images between images of every two adjacent frames in the second image set, the number of the intermediate frame images being determined based on the first frame rate and the third frame rate;
and for an intermediate frame image between every two adjacent frames of images, inserting the intermediate frame image between the two adjacent frames of images to obtain a fourth image set of the third frame rate.
7. The method of claim 1, wherein the method further comprises:
configuring recording parameters, wherein the recording parameters comprise the first frame rate, the second frame rate and the third frame rate.
8. A video processing apparatus, comprising:
the recording unit is configured to, when it is determined based on motion data of pixel points of multiple frames of preview images that a moving object exists in a shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time period preceding the second time period;
the time domain filtering unit is used for performing time domain filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is less than the first frame rate;
the frame interpolation processing unit is used for performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
and the encoding unit is used for encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow motion video.
9. A terminal device comprising a processor and a memory, the processor being coupled to the memory, wherein the memory is configured to store program code and the processor is configured to invoke the program code to perform a video processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the video processing method of any of the preceding claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011353666.7A CN112532880B (en) | 2020-11-26 | 2020-11-26 | Video processing method and device, terminal equipment and storage medium |
PCT/CN2021/126794 WO2022111198A1 (en) | 2020-11-26 | 2021-10-27 | Video processing method and apparatus, terminal device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011353666.7A CN112532880B (en) | 2020-11-26 | 2020-11-26 | Video processing method and device, terminal equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112532880A CN112532880A (en) | 2021-03-19 |
CN112532880B true CN112532880B (en) | 2022-03-11 |
Family
ID=74994094
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011353666.7A Active CN112532880B (en) | 2020-11-26 | 2020-11-26 | Video processing method and device, terminal equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112532880B (en) |
WO (1) | WO2022111198A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112532880B (en) * | 2020-11-26 | 2022-03-11 | 展讯通信(上海)有限公司 | Video processing method and device, terminal equipment and storage medium |
CN113067994B (en) * | 2021-03-31 | 2022-08-19 | 联想(北京)有限公司 | Video recording method and electronic equipment |
CN113099132B (en) * | 2021-04-19 | 2023-03-21 | 深圳市帧彩影视科技有限公司 | Video processing method, video processing apparatus, electronic device, storage medium, and program product |
CN114390236A (en) * | 2021-12-17 | 2022-04-22 | 云南腾云信息产业有限公司 | Video processing method, video processing device, computer equipment and storage medium |
CN117014686A (en) * | 2022-04-29 | 2023-11-07 | 荣耀终端有限公司 | Video processing method and electronic equipment |
CN115835035A (en) * | 2022-11-17 | 2023-03-21 | 歌尔科技有限公司 | Image frame interpolation method, device and equipment and computer readable storage medium |
CN116684668B (en) * | 2023-08-03 | 2023-10-20 | 湖南马栏山视频先进技术研究院有限公司 | Self-adaptive video frame processing method and playing terminal |
CN117315574B (en) * | 2023-09-20 | 2024-06-07 | 北京卓视智通科技有限责任公司 | Blind area track completion method, blind area track completion system, computer equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105282475A (en) * | 2014-06-27 | 2016-01-27 | 澜起科技(上海)有限公司 | Mobile subtitle detection and compensation method and system |
CN108184165A (en) * | 2017-12-28 | 2018-06-19 | 广东欧珀移动通信有限公司 | Video broadcasting method, electronic device and computer readable storage medium |
CN110086905A (en) * | 2018-03-26 | 2019-08-02 | 华为技术有限公司 | A kind of kinescope method and electronic equipment |
CN110636375A (en) * | 2019-11-11 | 2019-12-31 | RealMe重庆移动通信有限公司 | Video stream processing method and device, terminal equipment and computer readable storage medium |
CN111225150A (en) * | 2020-01-20 | 2020-06-02 | Oppo广东移动通信有限公司 | Method for processing interpolation frame and related product |
CN111586409A (en) * | 2020-05-14 | 2020-08-25 | Oppo广东移动通信有限公司 | Method and device for generating interpolation frame, electronic equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019062427A (en) * | 2017-09-27 | 2019-04-18 | キヤノン株式会社 | Imaging device, control method therefor, and program |
CN108600615A (en) * | 2018-04-04 | 2018-09-28 | 深圳市语图科技有限公司 | A kind of slow motion kinescope method and device |
US10764530B2 (en) * | 2018-10-04 | 2020-09-01 | Samsung Electronics Co., Ltd. | Method and system for recording a super slow motion video in a portable electronic device |
CN110933315B (en) * | 2019-12-10 | 2021-09-07 | Oppo广东移动通信有限公司 | Image data processing method and related equipment |
CN111277779B (en) * | 2020-03-05 | 2022-05-06 | Oppo广东移动通信有限公司 | Video processing method and related device |
CN112532880B (en) * | 2020-11-26 | 2022-03-11 | 展讯通信(上海)有限公司 | Video processing method and device, terminal equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112532880A (en) | 2021-03-19 |
WO2022111198A1 (en) | 2022-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112532880B (en) | Video processing method and device, terminal equipment and storage medium | |
WO2021175055A1 (en) | Video processing method and related device | |
CN109922372B (en) | Video data processing method and device, electronic equipment and storage medium | |
KR101526081B1 (en) | System and method for controllably viewing digital video streams captured by surveillance cameras | |
EP3136391B1 (en) | Method, device and terminal device for video effect processing | |
CN110060215B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110769158B (en) | Enhanced image capture | |
US8773542B2 (en) | Apparatus and method for adaptive camera control method based on predicted trajectory | |
CN109118430B (en) | Super-resolution image reconstruction method and device, electronic equipment and storage medium | |
CN108322650B (en) | Video shooting method and device, electronic equipment and computer readable storage medium | |
CN113067994B (en) | Video recording method and electronic equipment | |
CN113691737B (en) | Video shooting method and device and storage medium | |
KR100719841B1 (en) | Method for creation and indication of thumbnail view | |
WO2021057359A1 (en) | Image processing method, electronic device, and readable storage medium | |
CN113099272A (en) | Video processing method and device, electronic equipment and storage medium | |
US20140082208A1 (en) | Method and apparatus for multi-user content rendering | |
CN111444909B (en) | Image data acquisition method, terminal equipment and medium | |
US11146762B2 (en) | Methods and systems for reconstructing a high frame rate high resolution video | |
CN113315903B (en) | Image acquisition method and device, electronic equipment and storage medium | |
CN113506320A (en) | Image processing method and device, electronic equipment and storage medium | |
CN108933881B (en) | Video processing method and device | |
US20130343728A1 (en) | Imaging device, information processing device, and non-transitory computer readable medium storing program | |
CN113506323B (en) | Image processing method and device, electronic equipment and storage medium | |
KR101567668B1 (en) | Smartphones camera apparatus for generating video signal by multi-focus and method thereof | |
CN113727036A (en) | Video processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||