WO2022111198A1 - Video processing method and apparatus, terminal device and storage medium - Google Patents

Video processing method and apparatus, terminal device and storage medium Download PDF

Info

Publication number
WO2022111198A1
WO2022111198A1 (PCT/CN2021/126794, CN2021126794W)
Authority
WO
WIPO (PCT)
Prior art keywords
frame rate
image set
frame
image
images
Prior art date
Application number
PCT/CN2021/126794
Other languages
English (en)
French (fr)
Inventor
姬弘桢
张鑫
陈欢
王晓哲
罗小伟
张显坤
Original Assignee
展讯通信(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 展讯通信(上海)有限公司 filed Critical 展讯通信(上海)有限公司
Publication of WO2022111198A1 publication Critical patent/WO2022111198A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present application relates to the technical field of video processing, and in particular, to a video processing method, apparatus, terminal device and storage medium.
  • Slow-motion video refers to high-speed photography video, that is, a video that contains instantaneous movements that are invisible to the naked eye in a short period of time, such as the movement state of a bullet flying out of the chamber; another example, the action process of a football player shooting, and so on.
  • however, because high-speed photography equipment capable of shooting slow-motion video is expensive, ordinary users usually cannot record slow-motion videos. Therefore, how to use a terminal device to record slow-motion video is an important research topic in video processing technology.
  • the embodiments of the present application provide a video processing method, apparatus, terminal device, and storage medium.
  • the embodiments of the present application process an image set recorded by the terminal device, and can obtain a smooth and complete slow-motion video.
  • the present application provides a video processing method, the method comprising:
  • when it is determined based on the motion data of the pixels of the multi-frame preview images that there is a moving object in the shooting picture, the shooting picture is recorded at a first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively, the first time period being a time period before the second time period;
  • temporal filtering is performed on the first image set to obtain a third image set with a second frame rate, the second frame rate being smaller than the first frame rate;
  • frame interpolation processing is performed on the second image set to obtain a fourth image set with a third frame rate, the third frame rate being greater than the first frame rate;
  • the third image set and the fourth image set are encoded at the second frame rate, respectively, to obtain a slow-motion video.
  • an embodiment of the present application provides a video processing apparatus, and the apparatus includes:
  • the recording unit is configured to, when it is determined based on the motion data of the pixels of the multi-frame preview images that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and obtain the first image set and a second set of images for a second time period, the first time period being a time period preceding the second time period;
  • a temporal filtering unit configured to perform temporal filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is smaller than the first frame rate;
  • a frame insertion processing unit configured to perform frame insertion processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
  • the encoding unit is configured to encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  • an embodiment of the present application provides a terminal device, including a processor and a memory, where the processor and the memory are connected, the memory is used to store program code, and the processor is used to call the program code to execute the video processing method described in the first aspect.
  • an embodiment of the present application provides a chip, which is used for:
  • the shooting picture is recorded at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively, the first time period being a time period before the second time period;
  • the third image set and the fourth image set are encoded at the second frame rate, respectively, to obtain a slow-motion video.
  • an embodiment of the present application provides a modular device, the modular device includes a processor and a communication interface, the processor is connected to the communication interface, the communication interface is used for sending and receiving signals, and the processor is used for:
  • the shooting picture is recorded at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively, the first time period being a time period before the second time period;
  • the third image set and the fourth image set are encoded at the second frame rate, respectively, to obtain a slow-motion video.
  • an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the video processing method described in the first aspect.
  • when processing the low-frame-rate image sets collected by the terminal device, an intermediate frame image between two adjacent frames can be predicted, and the intermediate frame image can be inserted between the two adjacent frame images to obtain a high-frame-rate image set.
  • a slow-motion video with a slow rate will be obtained.
  • in addition, before performing the frame interpolation processing, the terminal device also performs temporal filtering on an image set and encodes the temporally filtered image set to obtain a constant-rate video, so that the final slow-motion video includes both constant-rate video and slowed-down video and has a rate-change effect.
  • the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, through the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained through a common terminal device. The user can record the desired slow-motion video more freely and improve the user experience.
  • FIG. 1 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a video processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a time-domain filtering provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a frame insertion process provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of another video processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • image capture devices in portable terminal devices on the market capture video images at recording frame rates of 30 frames per second (Frames Per Second, fps), 60fps, 120fps, and 240fps.
  • however, slow-motion video images often require an image capture device with a high frame rate, so ordinary terminal devices cannot record slow-motion video images.
  • the terminal device needs to be configured with a dedicated image sensor, such as a fast readout sensor (FRS).
  • the cost of a terminal device equipped with a dedicated image sensor is very high, and ordinary users cannot record slow-motion videos using ordinary terminal devices. Therefore, in order to solve the above problems, the embodiments of the present application provide a video processing system and a video processing solution that can complete slow-motion video production in a common terminal device.
  • the terminal devices may include, but are not limited to, smart phones, laptop computers, tablet computers, desktop computers, and the like.
  • the terminal device may include: an image acquisition module 101 , a hardware driving module 102 , an image processing module 103 , and an encoding module 104 .
  • the image acquisition module 101 is used for recording the image in the shooting screen, and the image acquisition module 101 may be an image sensor (sensor).
  • the hardware driver module 102 may be an image processing driver (Image Signal Processing driver, ISP driver).
  • the hardware driver module 102 is used to control the recording of the image capture module 101 , for example, the hardware driver module 102 controls the image capture module 101 to record an image set for a period of time at a first frame rate.
  • the image processing module 103 may be a hardware abstraction layer (HAL layer), and the image processing module 103 is used to process the recorded image set.
  • the encoding module 104 may be an encoder supporting the VSP protocol, and may be used to encode the image set processed by the image processing module 103 .
  • the video processing solution proposed in the embodiment of the present application specifically includes: when the image processing module 103 determines, based on the motion data of the pixels of the multi-frame preview images, that there is a moving object in the shooting picture, the image processing module 103 configures recording-related parameters (such as the first frame rate, the second frame rate, the third frame rate, etc.) for the hardware driver module 102; the hardware driver module 102 controls the image acquisition module 101 to record the shooting picture at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively;
  • then the image processing module 103 performs temporal filtering on the first image set to obtain a third image set with the second frame rate, and performs frame interpolation processing on the second image set to obtain a fourth image set with the third frame rate; finally, the encoding module 104 encodes the third image set and the fourth image set respectively to obtain a slow-motion video.
  • the video processing solution proposed in the embodiments of the present application converts the collected low frame rate image set into a fourth image set with a high frame rate by using frame interpolation processing, and performs temporal filtering processing before frame interpolation processing , so that the slow motion video has the effect of rate change, and temporal filtering can ensure the smoothness of the rate change in the slow motion video. Therefore, a complete and smooth slow-motion video can be recorded in the terminal device through the solutions provided by the embodiments of the present application.
  • the video processing method can be executed by the above-mentioned terminal device.
  • the video processing method may include S201-S204:
  • S201 When it is determined based on the motion data of the pixels of the multi-frame preview images that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and respectively obtain a first image set of a first time period and a second image set of a second time period, where the first time period is a time period before the second time period.
  • in a feasible implementation manner, recording the shooting picture at the first frame rate and respectively obtaining the first image set of the first time period and the second image set of the second time period are performed after the terminal device detects a trigger event.
  • the motion detection function of the terminal device has two states: the motion detection state is turned on and the motion detection state is turned off.
  • the trigger event may be that the terminal device detects that the user clicks the record button.
  • in another feasible implementation manner, when the terminal device is in the motion-detection-on state, the terminal device acquires the motion data of the pixels in the multi-frame preview images, and the trigger event may be that the terminal device determines, based on the motion data of the pixels in the multi-frame preview images, that there is a moving object in the shooting picture.
  • in a feasible implementation manner, after the terminal device detects the trigger event, the terminal device obtains the recording parameters, and then records the shooting picture at the first frame rate included in the recording parameters, respectively obtaining the first image set of the first time period and the second image set of the second time period.
  • the recording parameters are configured in advance according to business requirements and experience, and the recording parameters may include the first frame rate, the second frame rate, and the third frame rate mentioned in this article.
  • the recording parameters may also include other parameters required for recording, such as the size of the shooting screen, the exposure time during shooting, etc., which are not limited here.
  • the first time period is the time period before the second time period.
  • the durations included in the first time period and the second time period may be set according to business requirements or experience.
  • the first time period can be set to 1 second
  • the second time period can be set to 0.25 seconds.
  • furthermore, the duration of the second time period cannot exceed a duration threshold, which is determined by the hardware of the terminal device, such as the processing rate of the processor and the size of the memory.
  • the first frame rate may be directly the maximum recording frame rate of the terminal device.
  • the maximum recording frame rate of the terminal device is related to its hardware (e.g., the image sensor). For example, if the maximum recording frame rate supported by the terminal device is 240fps, then the first frame rate is also 240fps.
  • the first frame rate can also be set according to experience and business requirements.
  • the first frame rate here may be less than or equal to the maximum recording frame rate of the terminal device. For example, assuming that the maximum recording frame rate supported by the terminal device is 240fps, the first frame rate may be 240fps, 120fps, and so on.
  • S202 Perform temporal filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is smaller than the first frame rate.
  • the recording parameters include the second frame rate.
  • the terminal device calculates the ratio of the first frame rate and the second frame rate, and determines the continuous sampling frequency according to the ratio of the first frame rate and the second frame rate.
  • the terminal device acquires the number of consecutive sampling frames, and continuously samples the first image set based on the continuous sampling frequency and the number of consecutive sampling frames to obtain multiple frames of images to be fused.
  • a third image set with a second frame rate is obtained by fusing the multiple frames of images to be fused with the consecutive sampling frame numbers.
  • the number of consecutive sampling frames may be set according to experience and service requirements.
  • the second frame rate may be equal to the encoding rate of the terminal device.
  • the encoding rate of the terminal device is 30fps, so the second frame rate may also be 30fps. It should be understood that with the emergence of new business scenarios, the second frame rate may also be a frame rate of other sizes. There is no limitation here.
  • the fusion of the images to be fused may refer to weighted fusion according to fusion parameters, where the fusion parameters may be set according to business and experience.
  • the terminal device determines that the frequency of continuous sampling is 8 according to the ratio of the first frame rate and the second frame rate. As shown in FIG. 3 , assuming that the first time period is 1 second, then the terminal device obtains a first image set including 240 frames of images at a first frame rate within the first time period. Assuming that the number of consecutively sampled frames is 3, the terminal device can continuously sample 3 frames every 8 frames in 240 frames of images to obtain 90 frames of image sets to be fused.
  • then every 3 frames of these 90 frames are fused in turn into 1 frame, and the third image set finally obtained contains 30 frames of images. That is to say, the 240 frames of images within 1 second are temporally filtered to obtain 30 frames of images within 1 second; in other words, temporal filtering is performed on the first image set collected at the first frame rate of 240fps to obtain the third image set with the second frame rate of 30fps.
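The following Python sketch illustrates this temporal filtering step under the stated example values (first frame rate 240fps, second frame rate 30fps, 3 consecutively sampled frames fused per output frame). The use of NumPy arrays for frames and the equal fusion weights are illustrative assumptions; the application only states that the fusion is a weighted fusion with configurable parameters.

```python
import numpy as np

def temporal_filter(frames, first_fps=240, second_fps=30,
                    consecutive=3, weights=None):
    """Temporally filter a high-frame-rate image set down to a lower frame rate.

    Every first_fps // second_fps frames, `consecutive` frames are sampled and
    fused (weighted average) into one output frame, as described for step S202.
    `frames` is a list of equally sized NumPy arrays.
    """
    step = first_fps // second_fps                      # continuous sampling frequency, e.g. 240 / 30 = 8
    if weights is None:
        weights = [1.0 / consecutive] * consecutive     # assumed equal fusion weights
    fused = []
    for start in range(0, len(frames) - consecutive + 1, step):
        group = frames[start:start + consecutive]       # 3 consecutive frames to be fused
        out = sum(w * f.astype(np.float64) for w, f in zip(weights, group))
        fused.append(out.astype(frames[0].dtype))
    return fused

# Example: 240 frames captured in 1 second at 240fps become 30 fused frames (30fps).
one_second = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(240)]
assert len(temporal_filter(one_second)) == 30
```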
  • S203 Perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate.
  • the recording parameters include a third frame rate.
  • the terminal device may determine the frame number of the intermediate frame image according to the ratio of the third frame rate and the first frame rate, and predict the intermediate frame image between the images of every two adjacent frames in the second image set based on the frame number of the intermediate frame image . Then, for every intermediate frame image between the images of two adjacent frames, the intermediate frame image is inserted between the images of the two adjacent frames to obtain a fourth image set at a third frame rate.
  • optionally, the terminal device may invoke an algorithm to predict an intermediate frame image between every two adjacent frames in the second image set.
  • the algorithm here may be an algorithm for predicting the motion trajectory of an object in an image, such as a motion vector algorithm.
  • the terminal device may determine a third frame rate according to the first frame rate and a preset threshold, so that the ratio of the third frame rate to the first frame rate is less than or equal to the preset threshold.
  • following the example described in step S202, the first frame rate is 240fps; assuming that the preset threshold is 4, the third frame rate can be determined, according to the first frame rate of 240fps and the preset threshold of 4, to be 480fps, 720fps, 960fps, and so on.
  • referring to FIG. 4, when the first frame rate is 240fps and the third frame rate is 960fps, the terminal device calculates the ratio of the third frame rate to the first frame rate as 4, and then subtracts the base 1 from this ratio to obtain the number of intermediate frame images, which is 3. That is to say, for every two adjacent frames, the terminal device may invoke the algorithm to predict the 3 intermediate frame images between those two adjacent frames.
  • the Nth frame image and the N'th frame image are exemplary images of two adjacent frames in the second image set.
  • the terminal device invokes the algorithm to predict the 3 intermediate frame images between these two adjacent frames, that is, 3 new frames are generated by predicting the movement of the main object between the Nth frame image and the N'th frame image, namely the 1st intermediate frame image, the 2nd intermediate frame image and the 3rd intermediate frame image shown in FIG. 4.
  • the main object is a vehicle, which has been predicted and represented in the intermediate frame images based on the position changes of the vehicle in the Nth frame image and the N'th frame image.
  • assuming that the second time period is 0.25 seconds, the terminal device obtains, within the second time period and at the first frame rate, a second image set including 60 frames of images. For every two adjacent frames in the second image set, 3 intermediate frame images can be obtained and inserted between the two adjacent frames, so the terminal device obtains a fourth image set including 240 frames. That is to say, the 60 frames of images within 0.25 seconds are frame-interpolated into 240 frames of images within 0.25 seconds; in other words, frame interpolation processing is performed on the second image set collected at the first frame rate of 240fps to obtain the fourth image set with the third frame rate of 960fps.
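As a rough sketch of step S203, the code below inserts (third_fps / first_fps - 1) intermediate frames between every two adjacent frames. For brevity it generates the intermediate frames by simple linear blending of the two neighbouring frames; this is a stand-in for the motion-vector-based prediction of the main object's movement described above and illustrated in FIG. 4, not the method itself.

```python
import numpy as np

def interpolate_frames(frames, first_fps=240, third_fps=960):
    """Insert intermediate frames so a first_fps image set approaches a third_fps one.

    Intermediate frames per adjacent pair = third_fps / first_fps - 1
    (e.g. 960 / 240 - 1 = 3), as in step S203. Linear blending is used here only
    as a placeholder for motion-based prediction of the intermediate frames.
    """
    n_mid = third_fps // first_fps - 1                 # e.g. 3 intermediate frames per pair
    out = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        out.append(prev)
        for k in range(1, n_mid + 1):
            alpha = k / (n_mid + 1)                    # relative position between the two frames
            mid = (1 - alpha) * prev.astype(np.float64) + alpha * nxt.astype(np.float64)
            out.append(mid.astype(prev.dtype))
    out.append(frames[-1])
    return out

# Example: 60 frames captured in 0.25 s give 60 + 59 * 3 = 237 frames here; the
# 240-frame count in the text presumably also interpolates at the segment boundary,
# which this simplified sketch does not do.
quarter_second = [np.full((4, 4), i, dtype=np.uint8) for i in range(60)]
print(len(interpolate_frames(quarter_second)))  # 237
```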
  • S204 Encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  • in a feasible implementation manner, the terminal device obtains the encoding protocol, then encodes the third image set at the second frame rate according to the encoding protocol, and finally encodes the fourth image set at the second frame rate to obtain the final slow-motion video.
  • the position of the video encoded by the third image set in the slow motion video is before the position of the video encoded by the fourth image set in the slow motion video.
  • following the above, the terminal device obtains a third image set containing 30 frames of images by temporally filtering the first image set within 1 second, and obtains a fourth image set containing 240 frames of images by frame interpolation of the second image set within 0.25 seconds.
  • the terminal device encodes the third image set and the fourth image set respectively at the second frame rate (ie, 30fps), and can obtain a 9-second slow-motion video.
  • within the 1st second of the slow-motion video, the motion rate of the main object in the slow-motion video is equal to the actual motion rate of the main object.
  • from the 2nd second to the 9th second of the slow-motion video, the movement rate of the main object in the slow-motion video is smaller than the actual movement rate of the main object.
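The 9-second figure follows directly from the frame counts above; the arithmetic below restates it. The 32x slow-down factor for the second segment is our own derivation from these numbers rather than a value stated explicitly in the text.

```latex
% Playback duration of the encoded slow-motion video at the second frame rate (30 fps)
T \;=\; \frac{30\ \text{frames}}{30\ \text{fps}} \;+\; \frac{240\ \text{frames}}{30\ \text{fps}}
  \;=\; 1\ \text{s} \;+\; 8\ \text{s} \;=\; 9\ \text{s}.
% The 0.25 s second segment is therefore stretched to 8 s, i.e. slowed down by
% a factor of 8 / 0.25 = 32 = 960 / 30 (third frame rate over second frame rate).
```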
  • when processing the low-frame-rate image sets collected by the terminal device, an intermediate frame image between two adjacent frames can be predicted, and the intermediate frame image can be inserted between the two adjacent frame images to obtain a high-frame-rate image set.
  • a slow-motion video with a slow rate will be obtained.
  • in addition, before performing the frame interpolation processing, the terminal device additionally performs temporal filtering on an image set, and encodes the temporally filtered image set to obtain a constant-rate video.
  • the final slow-motion video includes a video with a constant rate and a video with a slowed rate, with the effect of changing the rate.
  • the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, through the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained through a common terminal device. In this way, the user can record the desired slow-motion video more freely, thereby improving the user experience.
  • the trigger event shown in FIG. 2 may be based on motion data of pixels of multiple frames of preview images to determine that there is a moving object in the shooting picture.
  • the motion vector method can be used to calculate the motion data of the pixels of the multi-frame preview image to determine whether there is a moving object in the picture.
  • the embodiment of the present application also proposes another video processing method; as shown in FIG. 5 , the video processing method may include S501-S507:
  • S501 Obtain a multi-frame preview image of a second frame rate by sampling the original image recorded at the first frame rate in the shooting picture.
  • the terminal device may sample the original image recorded at the first frame rate in the shooting screen to obtain a multi-frame preview image at the second frame rate.
  • the so-called sampling refers to selecting frames from the original images at a sampling frequency, so as to obtain the preview images on which motion detection is to be performed.
  • the sampling frequency is determined according to the ratio of the first frame rate to the second frame rate. For example, when the first frame rate is 240fps and the second frame rate is 30fps, the sampling frequency can be determined to be 8 according to the ratio of the first frame rate to the second frame rate, indicating that one frame needs to be selected from every 8 frames of the original images as a preview image.
  • in that case, the 240 frames of images collected by the terminal device within 1 second are sampled into 30 frames of images; that is, the original images recorded at the first frame rate are converted by sampling into multi-frame preview images at the second frame rate.
  • the multi-frame preview images at the second frame rate can also be previewed and output through the display screen of the terminal device.
  • S502 Calculate the motion data of each pixel of the multi-frame preview image by using the motion vector method.
  • in a feasible implementation manner, the terminal device may select a pixel of the multi-frame preview images in the shooting picture, and perform a vector operation on the pixel coordinates of that pixel in the multi-frame preview images to obtain the displacement vector of the pixel.
  • the multiple frames of preview images may be adjacent preview images every two frames, or may be consecutive n frames of preview images.
  • the terminal device can find the same pixel from multiple frames of preview images based on the block matching algorithm. For example, it is assumed that the multiple frames of preview images are three consecutive frames of preview images, and the three consecutive frames of preview images are respectively the n-2th frame preview image, the n-1th frame preview image, and the nth frame preview image. Select any pixel A in the preview image of the nth frame, and then find the pixel A1 with the highest matching degree with the pixel point A in the preview image of the n-1th frame according to the block matching algorithm, and in the preview image of the n-2th frame Find the pixel A2 that matches the pixel A with the highest degree. Obtain the pixel coordinates of pixel point A, pixel point A1 and pixel point A2, and perform vector operation on these three pixel coordinates to obtain the displacement vector of pixel point A.
  • S503 Determine whether there is a moving object in the photographed image according to the motion data of each pixel of the multi-frame preview image.
  • if the shooting angle changes (for example, the shooting angle shifts because the user's hand shakes), the motion data of every pixel in the shooting picture is the same. Therefore, in a feasible implementation manner, when the displacement vectors of all pixels are the same, the terminal device can determine that there is no moving object in the shooting picture.
  • when the displacement vectors of the pixels take multiple different values, the displacement vectors of stationary objects and moving objects in the shooting picture differ, and the terminal device can determine that there is a moving object in the shooting picture.
  • in a feasible implementation manner, if the terminal device determines that there is a moving object in the shooting picture, the terminal device may be triggered to perform step S504. If the terminal device determines that there is no moving object in the shooting picture, the terminal device will repeat steps S502 and S503. Following the example described in step S502, if the displacement vectors of all pixels in the nth frame preview image are the same, the terminal device can determine the displacement vector of each pixel in the (n+1)th frame preview image according to the (n-1)th, nth and (n+1)th frame preview images, and judge, according to the displacement vectors of the pixels in the (n+1)th frame preview image, whether there is a moving object in the shooting picture.
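The sketch below illustrates the detection logic of S502-S503 under simplifying assumptions: a pixel's neighbourhood is tracked across three consecutive grayscale preview frames by exhaustive sum-of-absolute-differences (SAD) block matching within a small search window, and the scene is flagged as containing a moving object when the resulting displacement vectors are not all identical. The block size, search radius, sampling grid and SAD criterion are illustrative choices, not values specified by this application.

```python
import numpy as np

def match_block(ref, target, y, x, block=8, search=4):
    """Find the position in `target` whose block best matches (lowest SAD) the
    block at (y, x) in `ref`, searching within +/- `search` pixels."""
    h, w = ref.shape
    patch = ref[y:y + block, x:x + block].astype(np.int32)
    best, best_pos = None, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny <= h - block and 0 <= nx <= w - block:
                cand = target[ny:ny + block, nx:nx + block].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_pos = sad, (ny, nx)
    return best_pos

def has_moving_object(prev2, prev1, cur, block=8, search=4, grid=16):
    """Return True if the displacement vectors of sampled pixels are not all identical."""
    vectors = set()
    h, w = cur.shape
    for y in range(0, h - block + 1, grid):
        for x in range(0, w - block + 1, grid):
            y1, x1 = match_block(cur, prev1, y, x, block, search)      # best match in frame n-1
            y2, x2 = match_block(prev1, prev2, y1, x1, block, search)  # best match in frame n-2
            vectors.add((y - y2, x - x2))                              # overall displacement of this pixel
    return len(vectors) > 1   # identical vectors everywhere => global shift only (e.g. hand shake)
```

In the terminal device this check would run on the second-frame-rate preview stream; only when it indicates a moving object does the high-frame-rate recording of step S504 start.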
  • S504 Record the captured image at a first frame rate, and obtain a first image set in a first time period and a second image set in a second time period, where the first time period is the duration before the second time period.
  • S505 Perform temporal filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is smaller than the first frame rate.
  • S506 Perform frame insertion processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate.
  • S507 Encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  • an intermediate frame image between images of two adjacent frames can be predicted, and an intermediate frame image can be inserted between the two adjacent frame images. to obtain high frame rate image sets.
  • a slow-motion video with a slow rate will be obtained.
  • in addition, before performing the frame interpolation processing, the terminal device additionally performs temporal filtering on an image set, and encodes the temporally filtered image set to obtain a constant-rate video.
  • the final slow-motion video includes a video with a constant rate and a video with a slowed rate, with the effect of changing the rate.
  • the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, through the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained through a common terminal device.
  • the terminal device uses the motion vector algorithm to calculate the motion data of the pixels of the multi-frame preview images to determine whether there is a moving object in the picture. It can effectively distinguish the change of the shooting angle caused by the hand shake and the moving objects in the shooting screen, and improve the user's recording experience.
  • FIG. 6 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application.
  • the apparatus described in this embodiment may include a recording unit 601 , a temporal filtering unit 602 , a frame insertion processing unit 603 , and an encoding unit 604 .
  • the recording unit 601 is configured to, when it is determined based on the motion data of the pixels of the multi-frame preview images that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and obtain a first image set of a first time period respectively and a second set of images for a second time period, the first time period being the duration before the second time period;
  • a temporal filtering unit 602 configured to perform temporal filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is smaller than the first frame rate;
  • a frame insertion processing unit 603, configured to perform frame insertion processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate;
  • the encoding unit 604 is configured to encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  • the apparatus further includes a computing unit 605, and the computing unit 605 is configured to obtain multi-frame preview images of the second frame rate by sampling the original image recorded at the first frame rate in the shooting picture;
  • the motion data of each pixel of the multi-frame preview image is calculated by the motion vector method.
  • the calculation unit 605 uses the motion vector method to calculate the motion data of each pixel, including: acquiring a pixel of the multi-frame preview images in the shooting picture; and performing a vector operation on the pixel coordinates of the pixel in the multi-frame preview images to obtain the displacement vector of the pixel.
  • the temporal filtering unit 602 is configured to perform temporal filtering on the first image set to obtain a third image set at the second frame rate, including:
  • the first image set is continuously sampled according to the continuous sampling parameter to obtain multiple frames of images to be fused.
  • the continuous sampling parameter includes the continuous sampling frequency and the number of continuous sampling frames, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
  • a third image set with a second frame rate is obtained by fusing the multiple frames of images to be fused with the consecutive sampling frame numbers.
  • the frame interpolation processing unit 603 is configured to perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, including: predicting an intermediate frame image between every two adjacent frames in the second image set, the number of intermediate frame images being determined based on the first frame rate and the third frame rate;
  • the intermediate frame image is inserted between the images of the two adjacent frames to obtain a fourth image set at a third frame rate.
  • the apparatus further includes a configuration unit 606 for configuring recording parameters, where the recording parameters include a first frame rate, a second frame rate and a third frame rate.
  • in the embodiment of the present application, when it is determined based on the motion data of the pixels of the multi-frame preview images that there is a moving object in the shooting picture, the recording unit 601 records the shooting picture at the first frame rate and respectively obtains the first image set of the first time period and the second image set of the second time period, the first time period being a time period before the second time period; then the temporal filtering unit 602 performs temporal filtering on the first image set to obtain the third image set with the second frame rate, the second frame rate being smaller than the first frame rate; next, the frame interpolation processing unit 603 performs frame interpolation processing on the second image set to obtain the fourth image set with the third frame rate, the third frame rate being greater than the first frame rate; finally, the encoding unit 604 encodes the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  • when processing the low-frame-rate image sets collected by the video processing apparatus, an intermediate frame image between two adjacent frames can be predicted, and the intermediate frame image can be inserted between the two adjacent frame images to obtain a high-frame-rate image set.
  • a slow-motion video with a slow rate will be obtained.
  • the video processing apparatus further performs temporal filtering processing on an image set before performing frame insertion processing, and encodes the image set after temporal filtering processing to obtain a video with a constant rate.
  • the final slow-motion video includes a video with a constant rate and a video with a slowed rate, with the effect of changing the rate.
  • the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, through the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained through a common video processing device.
  • FIG. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • the terminal device in this embodiment may include: a processor 701 and a memory 702 .
  • the above-mentioned processor 701 and the memory 702 are connected through a bus 703 .
  • the memory 702 is used to store a computer program, the computer program includes program instructions, and the processor 701 is used to execute the program instructions stored in the memory 702 .
  • the processor 701 is configured to perform the following operations by running the executable program code in the memory 702:
  • the shooting picture is recorded at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively, the first time period being a time period before the second time period;
  • the third image set and the fourth image set are encoded at the second frame rate, respectively, to obtain a slow-motion video.
  • the processor 701 is further configured to:
  • the motion data of each pixel of the multi-frame preview image is calculated by the motion vector method.
  • the processor 701 is configured to calculate the motion data of each pixel by using the motion vector method, including:
  • the processor 701 is configured to perform temporal filtering on the first image set to obtain a third image set at the second frame rate, including:
  • the first image set is continuously sampled according to the continuous sampling parameter to obtain multiple frames of images to be fused.
  • the continuous sampling parameter includes the continuous sampling frequency and the number of continuous sampling frames, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
  • a third image set with a second frame rate is obtained by fusing the multiple frames of images to be fused with the consecutive sampling frame numbers.
  • the processor 701 is configured to perform frame interpolation processing on the second image set to obtain a fourth image set at a third frame rate, including:
  • the intermediate frame image is inserted between the images of the two adjacent frames to obtain a fourth image set at a third frame rate.
  • the processor 701 is further configured to:
  • the recording parameters include the first frame rate, the second frame rate, and the third frame rate.
  • the processor 701 may be a central processing unit (Central Processing Unit, CPU), and the processor 701 may also be other general-purpose processors, digital signal processors (Digital Signal Processors, DSP), Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 702 which may include read-only memory and random access memory, provides instructions and data to the processor 701 .
  • a portion of the memory 702 may also include non-volatile random access memory, which may store the first frame rate, the second frame rate, the third frame rate, and the like.
  • in a specific implementation, the processor 701 and the memory 702 described in the embodiment of the present application may execute the implementation manners described in the flowcharts of the video processing methods provided in FIG. 2 or FIG. 5 of the embodiments of the present application, and may also execute the implementation manner described for the video processing apparatus shown in FIG. 6 provided in the embodiments of the present application, and details are not repeated here.
  • in the embodiment of the present application, when processing the low-frame-rate image sets collected by the terminal device, an intermediate frame image between two adjacent frames can be predicted, and the intermediate frame image can be inserted between the two adjacent frame images to obtain a high-frame-rate image set.
  • a slow-motion video with a slow rate will be obtained.
  • in addition, before performing the frame interpolation processing, the terminal device additionally performs temporal filtering on an image set, and encodes the temporally filtered image set to obtain a constant-rate video.
  • the final slow-motion video includes a video with a constant rate and a video with a slowed rate, with the effect of changing the rate.
  • the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, through the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained through a common terminal device.
  • Embodiments of the present application further provide a chip, where the chip can execute the relevant steps of the terminal device in the foregoing method embodiments.
  • This chip is used for:
  • the shooting picture is recorded at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively, the first time period being a time period before the second time period;
  • the third image set and the fourth image set are encoded at the second frame rate, respectively, to obtain a slow-motion video.
  • the chip is also used for:
  • the motion data of each pixel of the multi-frame preview image is calculated by the motion vector method.
  • the chip is used to calculate the motion data of each pixel by using the motion vector method, including:
  • the chip is used to perform temporal filtering on the first image set to obtain a third image set at the second frame rate, including:
  • the first image set is continuously sampled according to the continuous sampling parameter to obtain multiple frames of images to be fused.
  • the continuous sampling parameter includes the continuous sampling frequency and the number of continuous sampling frames, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
  • a third image set with a second frame rate is obtained by fusing the multiple frames of images to be fused with the consecutive sampling frame numbers.
  • the chip is used to perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, including:
  • the intermediate frame image is inserted between the images of the two adjacent frames to obtain a fourth image set at a third frame rate.
  • the chip is also used for:
  • the recording parameters include the first frame rate, the second frame rate, and the third frame rate.
  • the embodiment of the present application also provides a modular device, the modular device includes a processor and a communication interface, the processor is connected to the communication interface, the communication interface is used for sending and receiving signals, and the processor is used for:
  • the shooting picture is recorded at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively, the first time period being a time period before the second time period;
  • the third image set and the fourth image set are encoded at the second frame rate, respectively, to obtain a slow-motion video.
  • the processor is also used to:
  • the motion data of each pixel of the multi-frame preview image is calculated by the motion vector method.
  • the processor is configured to use the motion vector method to calculate the motion data of each pixel, including:
  • the processor is configured to perform temporal filtering on the first image set to obtain a third image set at the second frame rate, including:
  • the first image set is continuously sampled according to the continuous sampling parameter to obtain multiple frames of images to be fused.
  • the continuous sampling parameter includes the continuous sampling frequency and the number of continuous sampling frames, and the continuous sampling frequency is determined according to the first frame rate and the second frame rate;
  • a third image set with a second frame rate is obtained by fusing the multiple frames of images to be fused with the consecutive sampling frame numbers.
  • the processor is configured to perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, including:
  • the intermediate frame image is inserted between the images of the two adjacent frames to obtain a fourth image set at a third frame rate.
  • the processor is also used to:
  • the recording parameters include the first frame rate, the second frame rate, and the third frame rate.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the readable storage medium.
  • the computer program can be used to implement the video processing method described in the embodiments of the present application, which is not repeated here.
  • the computer-readable storage medium may be an internal storage unit of the video processing device of any of the foregoing embodiments, such as a hard disk or a memory of the device.
  • the computer-readable storage medium can also be an external storage device of the video processing device, such as a plug-in hard disk equipped on the device, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card ( Flash Card), etc.
  • the computer-readable storage medium may also include both an internal storage unit of the video processing device and an external storage device.
  • the computer-readable storage medium is used to store computer programs and other programs and data required by the video processing apparatus.
  • the computer-readable storage medium can also be used to temporarily store data that has been or will be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application discloses a video processing method and apparatus, a terminal device and a storage medium. The method includes: when it is determined, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture, recording the shooting picture at a first frame rate, and respectively obtaining a first image set of a first time period and a second image set of a second time period; performing temporal filtering on the first image set to obtain a third image set with a second frame rate, the second frame rate being smaller than the first frame rate; performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, the third frame rate being greater than the first frame rate; and encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video. By processing the image sets recorded by the terminal device, the embodiments of the present application can obtain a smooth and complete slow-motion video.

Description

Video processing method and apparatus, terminal device and storage medium

Technical field

The present application relates to the technical field of video processing, and in particular to a video processing method and apparatus, a terminal device and a storage medium.

Background

Slow-motion video refers to high-speed photography video, that is, video that captures instantaneous motion invisible to the naked eye within a short period of time, such as the motion of a bullet leaving the barrel, or the action of a football player taking a shot, and so on. However, because high-speed photography equipment that supports shooting slow-motion video is expensive, ordinary users usually cannot record slow-motion video. Therefore, how to use a terminal device to record slow-motion video is an important research topic in video processing technology.

Summary of the invention

The embodiments of the present application provide a video processing method and apparatus, a terminal device and a storage medium. The embodiments of the present application process the image sets recorded by a terminal device and can obtain a smooth and complete slow-motion video.

In a first aspect, the present application provides a video processing method, the method including:

when it is determined, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture, recording the shooting picture at a first frame rate, and respectively obtaining a first image set of a first time period and a second image set of a second time period, the first time period being a time period before the second time period;

performing temporal filtering on the first image set to obtain a third image set with a second frame rate, the second frame rate being smaller than the first frame rate;

performing frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, the third frame rate being greater than the first frame rate; and

encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
In a second aspect, an embodiment of the present application provides a video processing apparatus, the apparatus including:

a recording unit, configured to, when it is determined based on the motion data of the pixels of multi-frame preview images that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and respectively obtain a first image set of a first time period and a second image set of a second time period, the first time period being a time period before the second time period;

a temporal filtering unit, configured to perform temporal filtering on the first image set to obtain a third image set with a second frame rate, the second frame rate being smaller than the first frame rate;

a frame interpolation processing unit, configured to perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, the third frame rate being greater than the first frame rate; and

an encoding unit, configured to encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.

In a third aspect, an embodiment of the present application provides a terminal device, including a processor and a memory, the processor being connected to the memory, wherein the memory is used to store program code and the processor is used to call the program code to execute the video processing method described in the first aspect.

In a fourth aspect, an embodiment of the present application provides a chip, the chip being configured to:

when it is determined, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and respectively obtain a first image set of a first time period and a second image set of a second time period, the first time period being a time period before the second time period;

perform temporal filtering on the first image set to obtain a third image set with a second frame rate, the second frame rate being smaller than the first frame rate;

perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, the third frame rate being greater than the first frame rate; and

encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
In a fifth aspect, an embodiment of the present application provides a modular device, the modular device including a processor and a communication interface, the processor being connected to the communication interface, the communication interface being used to send and receive signals, and the processor being configured to:

when it is determined, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and respectively obtain a first image set of a first time period and a second image set of a second time period, the first time period being a time period before the second time period;

perform temporal filtering on the first image set to obtain a third image set with a second frame rate, the second frame rate being smaller than the first frame rate;

perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate, the third frame rate being greater than the first frame rate; and

encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.

In a sixth aspect, an embodiment of the present application further provides a computer-readable storage medium, the computer-readable storage medium storing a computer program which, when executed by a processor, implements the video processing method described in the first aspect.

When processing the low-frame-rate image sets collected by the terminal device, the embodiments of the present application can predict an intermediate frame image between every two adjacent frames and insert the intermediate frame image between those two adjacent frames to obtain a high-frame-rate image set. After the high-frame-rate image set is encoded and output, a slowed-down slow-motion video is obtained. In addition, before performing the frame interpolation processing, the terminal device also performs temporal filtering on one image set and encodes the temporally filtered image set to obtain a constant-rate video, so that the final slow-motion video includes both constant-rate video and slowed-down video and has a rate-change effect. Moreover, the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, with the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained with an ordinary terminal device, allowing the user to record the desired slow-motion video more freely and improving the user experience.
Brief description of the drawings

In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a terminal device provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a video processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of temporal filtering provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of frame interpolation processing provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.

Detailed description of the embodiments

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Typically, the image capture devices (such as image sensors) in portable terminal devices on the market capture video images at recording frame rates of 30 frames per second (fps), 60fps, 120fps and 240fps. However, slow-motion video images often require an image capture device with a high frame rate, and ordinary terminal devices cannot record slow-motion video images. In order to obtain slow-motion video images at a high frame rate such as 480fps or 960fps, the terminal device needs to be configured with a dedicated image sensor, such as a fast readout sensor (FRS). However, the cost of a terminal device equipped with a dedicated image sensor is very high, and ordinary users cannot record slow-motion videos with ordinary terminal devices. Therefore, in order to solve the above problems, the embodiments of the present application provide a video processing system and a video processing solution that can complete slow-motion video production on an ordinary terminal device.

Referring to FIG. 1, FIG. 1 shows a system architecture diagram of a terminal device. The terminal device may include, but is not limited to, a smart phone, a laptop computer, a tablet computer, a desktop computer, and the like. The terminal device may include an image acquisition module 101, a hardware driver module 102, an image processing module 103 and an encoding module 104. The image acquisition module 101 is used to record the images in the shooting picture and may be an image sensor. The hardware driver module 102 may be an image signal processing driver (ISP driver) and is used to control the recording of the image acquisition module 101; for example, the hardware driver module 102 controls the image acquisition module 101 to record an image set of a time period at the first frame rate. The image processing module 103 may be a hardware abstraction layer (HAL layer) and is used to process the recorded image sets. The encoding module 104 may be an encoder supporting the VSP protocol and may be used to encode the image sets processed by the image processing module 103. In a specific implementation, the video processing solution proposed in the embodiments of the present application specifically includes: when the image processing module 103 determines, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture, the image processing module 103 configures recording-related parameters (such as the first frame rate, the second frame rate, the third frame rate, and so on) for the hardware driver module 102; the hardware driver module 102 controls the image acquisition module 101 to record the shooting picture at the first frame rate, and a first image set of a first time period and a second image set of a second time period are obtained respectively; then the image processing module 103 performs temporal filtering on the first image set to obtain a third image set with a second frame rate, and performs frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; finally, the encoding module 104 encodes the third image set and the fourth image set respectively to obtain a slow-motion video.

It can be seen that the video processing solution proposed in the embodiments of the present application uses frame interpolation processing to convert the collected low-frame-rate image set into a fourth image set with a high frame rate, and performs temporal filtering before the frame interpolation processing, so that the slow-motion video has a rate-change effect, and the temporal filtering can ensure the smoothness of the rate change in the slow-motion video. Therefore, a complete and smooth slow-motion video can be recorded on a terminal device with the solutions provided by the embodiments of the present application.

Based on the above description of the video processing solution, an embodiment of the present application proposes a video processing method; the video processing method can be executed by the terminal device mentioned above. Referring to FIG. 2, the video processing method may include S201-S204:
S201: When it is determined, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture, record the shooting picture at a first frame rate, and respectively obtain a first image set of a first time period and a second image set of a second time period, the first time period being a time period before the second time period.

In a feasible implementation manner, recording the shooting picture at the first frame rate and respectively obtaining the first image set of the first time period and the second image set of the second time period are performed after the terminal device detects a trigger event.

The motion detection function of the terminal device has two states: motion detection on and motion detection off. In a feasible implementation manner, when the terminal device is in the motion-detection-off state, the trigger event may be that the terminal device detects that the user clicks the record button. In another feasible implementation manner, when the terminal device is in the motion-detection-on state, the terminal device acquires the motion data of the pixels in the multi-frame preview images, and the trigger event may be that the terminal device determines, based on the motion data of the pixels in the multi-frame preview images, that there is a moving object in the shooting picture.

In a feasible implementation manner, after the terminal device detects the trigger event, the terminal device acquires the recording parameters and then records the shooting picture at the first frame rate included in the recording parameters, respectively obtaining the first image set of the first time period and the second image set of the second time period.

The recording parameters are configured in advance according to business requirements and experience, and may include the first frame rate, the second frame rate and the third frame rate mentioned herein. In addition, the recording parameters may also include other parameters required for recording, such as the size of the shooting picture and the exposure time during shooting, which are not limited here.

The first time period is a time period before the second time period. Optionally, the durations of the first time period and the second time period may be set according to business requirements or experience. For example, the first time period may be set to 1 second and the second time period to 0.25 seconds. Furthermore, the duration of the second time period cannot exceed a duration threshold, which is determined by the hardware of the terminal device, such as the processing rate of the processor and the size of the memory.

Optionally, the first frame rate may simply be the maximum recording frame rate of the terminal device. The maximum recording frame rate of the terminal device is related to its hardware (for example, the image sensor). For example, if the maximum recording frame rate supported by the terminal device is 240fps, the first frame rate is also 240fps. Optionally, the first frame rate may also be set according to experience and business requirements; in that case the first frame rate may be smaller than or equal to the maximum recording frame rate of the terminal device. For example, if the maximum recording frame rate supported by the terminal device is 240fps, the first frame rate may be 240fps, 120fps, and so on.
S202: Perform temporal filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is smaller than the first frame rate.

In a feasible implementation manner, the recording parameters include the second frame rate. The terminal device calculates the ratio of the first frame rate to the second frame rate and determines the continuous sampling frequency according to this ratio. The terminal device acquires the number of consecutively sampled frames, and continuously samples the first image set based on the continuous sampling frequency and the number of consecutively sampled frames to obtain multiple frames of images to be fused. The multiple frames of images to be fused are fused in groups of the consecutively sampled frame number to obtain the third image set with the second frame rate. The number of consecutively sampled frames may be set according to experience and business requirements.

In a feasible implementation manner, the second frame rate may be equal to the encoding rate of the terminal device. Typically, the encoding rate of a terminal device is 30fps, so the second frame rate may also be 30fps. It should be understood that, with the emergence of new business scenarios, the second frame rate may also be a frame rate of another size, which is not limited here.

In a feasible implementation manner, fusing the images to be fused may refer to weighted fusion according to fusion parameters, where the fusion parameters may be set according to business requirements and experience.

To better explain this solution, the embodiment of the present application is described below with a specific example. Assume that the first frame rate is 240fps and the second frame rate is 30fps; the terminal device determines, from the ratio of the first frame rate to the second frame rate, that the continuous sampling frequency is 8. As shown in FIG. 3, assuming that the first time period is 1 second, the terminal device obtains, within the first time period and at the first frame rate, a first image set containing 240 frames of images. Assuming that the number of consecutively sampled frames is 3, the terminal device can continuously sample 3 frames out of every 8 frames of the 240 frames to obtain 90 frames of images to be fused. Then, every 3 of these 90 frames are fused in turn into 1 frame, and the third image set finally obtained contains 30 frames of images. That is to say, the 240 frames of images within 1 second are temporally filtered to obtain 30 frames of images within 1 second; in other words, temporal filtering is performed on the first image set collected at the first frame rate of 240fps to obtain the third image set with the second frame rate of 30fps.
S203: Perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate.

In a feasible implementation manner, the recording parameters include the third frame rate. The terminal device may determine the number of intermediate frame images according to the ratio of the third frame rate to the first frame rate, and predict, based on this number, the intermediate frame images between every two adjacent frames in the second image set. Then, for the intermediate frame images between every two adjacent frames, the intermediate frame images are inserted between the two adjacent frames to obtain the fourth image set with the third frame rate.

Optionally, the terminal device may invoke an algorithm to predict the intermediate frame images between every two adjacent frames in the second image set. The algorithm here may be an algorithm for predicting the motion trajectory of an object in an image, such as a motion vector algorithm.

In a feasible implementation manner, if too many intermediate frames are inserted between two adjacent frames, the change between adjacent images in the slow-motion video becomes small, the user visually tends to ignore the change between images and therefore perceives the slow-motion video as stuttering rather than smooth. In order to guarantee the smoothness of the slow-motion video, the terminal device may determine the third frame rate according to the first frame rate and a preset threshold, so that the ratio of the third frame rate to the first frame rate is smaller than or equal to the preset threshold. Following the example described in step S202, the first frame rate is 240fps; assuming that the preset threshold is 4, the third frame rate may be determined, according to the first frame rate of 240fps and the preset threshold of 4, to be 480fps, 720fps, 960fps, and so on.

Referring to FIG. 4, when the first frame rate is 240fps and the third frame rate is 960fps, the terminal device calculates the ratio of the third frame rate to the first frame rate as 4, and then subtracts the base 1 from this ratio to obtain the number of intermediate frame images, which is 3. That is to say, for every two adjacent frames, the terminal device may invoke the algorithm to predict the 3 intermediate frame images between the two adjacent frames. As shown in FIG. 4, the Nth frame image and the N'th frame image are two exemplary adjacent frames in the second image set. The terminal device invokes the algorithm to predict the 3 intermediate frame images between these two adjacent frames, that is, 3 new frames are generated by predicting the motion of the main object between the Nth frame image and the N'th frame image, namely the 1st intermediate frame image, the 2nd intermediate frame image and the 3rd intermediate frame image in FIG. 4. In the example shown in FIG. 4, the main object is a vehicle, whose position change between the Nth frame image and the N'th frame image has been predicted and represented in the intermediate frame images.

Assuming that the second time period is 0.25 seconds, the terminal device obtains, within the second time period and at the first frame rate, a second image set containing 60 frames of images. For every two adjacent frames in the second image set, 3 intermediate frame images can be obtained and inserted between the two adjacent frames, so the terminal device obtains a fourth image set containing 240 frames. That is to say, the 60 frames of images within 0.25 seconds are frame-interpolated into 240 frames of images within 0.25 seconds; in other words, frame interpolation processing is performed on the second image set collected at the first frame rate of 240fps to obtain the fourth image set with the third frame rate of 960fps.
S204: Encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.

In a feasible implementation manner, the terminal device acquires an encoding protocol, then encodes the third image set at the second frame rate according to the encoding protocol, and finally encodes the fourth image set at the second frame rate to obtain the final slow-motion video. The position, in the slow-motion video, of the video encoded from the third image set is before the position of the video encoded from the fourth image set.

Following the above, the terminal device obtains, by temporally filtering the first image set within 1 second, a third image set containing 30 frames of images, and obtains, by frame interpolation of the second image set within 0.25 seconds, a fourth image set containing 240 frames of images. The terminal device encodes the third image set and the fourth image set respectively at the second frame rate (that is, 30fps), and a 9-second slow-motion video can be obtained. Within the 1st second of the slow-motion video, the motion rate of the main object in the slow-motion video is equal to the actual motion rate of the main object. From the 2nd second to the 9th second of the slow-motion video, the motion rate of the main object in the slow-motion video is smaller than the actual motion rate of the main object.

When processing the low-frame-rate image sets collected by the terminal device, the embodiments of the present application can predict the intermediate frame image between two adjacent frames and insert the intermediate frame image between the two adjacent frames to obtain a high-frame-rate image set. After the high-frame-rate image set is encoded and output, a slowed-down slow-motion video is obtained. In addition, before performing the frame interpolation processing, the terminal device also performs temporal filtering on one image set and encodes the temporally filtered image set to obtain a constant-rate video. Thus, the final slow-motion video includes both constant-rate video and slowed-down video and has a rate-change effect. Moreover, the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, with the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained with an ordinary terminal device, allowing the user to record the desired slow-motion video more freely and improving the user experience.

As can be seen from the description of the method embodiment shown in FIG. 2, the trigger event shown in FIG. 2 may be determining, based on the motion data of the pixels of multi-frame preview images, that there is a moving object in the shooting picture. In order to effectively distinguish a change of the shooting angle from a moving object in the shooting picture, the motion vector method may be used to calculate the motion data of the pixels of the multi-frame preview images to determine whether there is a moving object in the picture. Based on this, an embodiment of the present application further proposes another video processing method; referring to FIG. 5, the video processing method may include S501-S507:
S501: Sample the original images recorded at the first frame rate in the shooting picture to obtain multi-frame preview images at the second frame rate.

In a feasible implementation manner, after the terminal device enters the preview mode, the terminal device may sample the original images recorded at the first frame rate in the shooting picture to obtain multi-frame preview images at the second frame rate. The so-called sampling refers to selecting frames from the original images at a sampling frequency to obtain the preview images on which motion detection is to be performed. The sampling frequency is determined according to the ratio of the first frame rate to the second frame rate. For example, when the first frame rate is 240fps and the second frame rate is 30fps, the sampling frequency can be determined to be 8 according to the ratio of the first frame rate to the second frame rate, indicating that one frame needs to be selected from every 8 frames of the original images as a preview image; in that case, the 240 frames of images collected by the terminal device within 1 second are sampled into 30 frames of images, that is, the original images recorded at the first frame rate are converted by sampling into multi-frame preview images at the second frame rate.

In a feasible implementation manner, the multi-frame preview images at the second frame rate may also be previewed and output through the display screen of the terminal device.

S502: Calculate the motion data of each pixel of the multi-frame preview images using the motion vector method.

In a feasible implementation manner, the terminal device may select a pixel of the multi-frame preview images in the shooting picture, and perform a vector operation on the pixel coordinates of that pixel in the multi-frame preview images to obtain the displacement vector of the pixel.

The multi-frame preview images may be every two adjacent preview images, or may be n consecutive preview images. The terminal device can find the same pixel in the multi-frame preview images based on a block matching algorithm. For example, assume that the multi-frame preview images are 3 consecutive preview images, namely the (n-2)th, (n-1)th and nth frame preview images. Select any pixel A in the nth frame preview image, then find, according to the block matching algorithm, the pixel A1 with the highest matching degree to pixel A in the (n-1)th frame preview image, and the pixel A2 with the highest matching degree to pixel A in the (n-2)th frame preview image. Obtain the pixel coordinates of pixel A, pixel A1 and pixel A2, and perform a vector operation on these three pixel coordinates to obtain the displacement vector of pixel A.

S503: Determine, according to the motion data of each pixel of the multi-frame preview images, whether there is a moving object in the shooting picture.

If the shooting angle changes (for example, the shooting angle shifts because the user's hand shakes), the motion data of every pixel in the shooting picture is the same. Therefore, in a feasible implementation manner, when the displacement vectors of all pixels are the same, the terminal device can determine that there is no moving object in the shooting picture. When the displacement vectors of the pixels take multiple values, the displacement vectors of stationary objects and moving objects in the shooting picture are different, and the terminal device can determine that there is a moving object in the shooting picture.

In a feasible implementation manner, if the terminal device determines that there is a moving object in the shooting picture, the terminal device is triggered to execute step S504. If the terminal device determines that there is no moving object in the shooting picture, the terminal device repeats steps S502 and S503. Following the example described in step S502, if the displacement vectors of all pixels in the nth frame preview image are the same, the terminal device can determine the displacement vector of each pixel in the (n+1)th frame preview image according to the (n-1)th, nth and (n+1)th frame preview images, and judge, according to the displacement vectors of the pixels in the (n+1)th frame preview image, whether there is a moving object in the shooting picture.
S504: Record the shooting picture at the first frame rate, and respectively obtain a first image set of a first time period and a second image set of a second time period, the first time period being a time period before the second time period.

S505: Perform temporal filtering on the first image set to obtain a third image set with a second frame rate; the second frame rate is smaller than the first frame rate.

S506: Perform frame interpolation processing on the second image set to obtain a fourth image set with a third frame rate; the third frame rate is greater than the first frame rate.

S507: Encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.

When processing the low-frame-rate image sets collected by the terminal device, the embodiments of the present application can predict the intermediate frame image between two adjacent frames and insert it between the two adjacent frames to obtain a high-frame-rate image set. After the high-frame-rate image set is encoded and output, a slowed-down slow-motion video is obtained. Furthermore, before performing the frame interpolation processing, the terminal device also performs temporal filtering on one image set and encodes the temporally filtered image set to obtain a constant-rate video. Thus, the final slow-motion video includes both constant-rate video and slowed-down video and has a rate-change effect. Moreover, the weighted fusion can ensure the smoothness of the video obtained by temporal filtering. Therefore, with the solutions of the embodiments of the present application, a complete and smooth slow-motion video can be obtained with an ordinary terminal device. In addition, the terminal device uses the motion vector algorithm to calculate the motion data of the pixels of the multi-frame preview images to determine whether there is a moving object in the picture, which can effectively distinguish a change of shooting angle caused by hand shake from a moving object in the shooting picture and improves the user's recording experience.
下面详细阐述本申请实施例提供的一种视频处理装置,图6为本申请实施例提供的一种视频处理装置的结构示意图。如图6所示,本实施例中所描述的装置,可以包括录制单元601、时域滤波单元602、插帧处理单元603和编码单元604。
录制单元601,用于当基于多帧预览图像的像素点的运动数据确定拍摄画面中存在运动的物体时,对拍摄画面以第一帧率进行录制,分别获得第一时间段的第一图像集和第二时间段的第二图像集,第一时间段是第二时间段之前的时长;
时域滤波单元602,用于对第一图像集进行时域滤波,得到第二帧率的第三图像集;第二帧率小于第一帧率;
插帧处理单元603,用于对第二图像集进行插帧处理,得到第三帧率的第四图像集;第三帧率大于第一帧率;
编码单元604,用于以第二帧率分别对第三图像集和第四图像集进行编码,获得慢动作视频。
在一些可行的实施方式中,该装置还包括计算单元605,计算单元605用于对拍摄画面内以第一帧率录制的原始图像采样得到第二帧率的多帧预览图像;
采用运动矢量法计算多帧预览图像的各个像素点的运动数据。
在一些可行的实施方式中,计算单元605采用运动矢量法计算各个像素点的运动数据,包括:
获取拍摄画面中的多帧预览图像的一个像素点;
对一个像素点在多帧预览图像中的像素坐标进行矢量运算得到一个像素点的位移矢量。
在一些可行的实施方式中,当各个像素点的位移矢量相同时,拍摄画面中不存在运动的物体;
当各个像素点的位移矢量存在多个值时,拍摄画面中存在运动的物体。
在一些可行的实施方式中,时域滤波单元602用于对第一图像集进行时域滤波,得到第二帧率的第三图像集,包括:
根据连续采样参数对第一图像集进行连续采样得到多帧待融合的图像,连续采样参数包括连续采样频率和连续采样帧数,连续采样频率是根据第一帧率和第二帧率确定的;
对多帧待融合的图像以连续采样帧数进行融合得到第二帧率的第三图像集。
在一些可行的实施方式中,插帧处理单元603用于对第二图像集进行插帧处理,得到第三帧率的第四图像集,包括:
预测第二图像集中每两个相邻帧的图像之间的中间帧图像,中间帧图像的帧数是基于第一帧率和所述第三帧率确定的;
针对每两个相邻帧的图像之间的中间帧图像,将中间帧图像插入到两个相邻帧的图像之间,获得第三帧率的第四图像集。
In some feasible implementations, the apparatus further includes a configuration unit 606 configured to configure recording parameters, the recording parameters including the first frame rate, the second frame rate and the third frame rate.
In this embodiment of the present application, when it is determined based on the motion data of pixels of multiple preview frames that a moving object exists in the shooting picture, the recording unit 601 records the shooting picture at the first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, where the first time period is a time length before the second time period; the temporal filtering unit 602 then performs temporal filtering on the first image set to obtain a third image set at the second frame rate, the second frame rate being lower than the first frame rate; next, the frame interpolation unit 603 performs frame interpolation on the second image set to obtain a fourth image set at the third frame rate, the third frame rate being higher than the first frame rate; finally, the encoding unit 604 encodes the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video. When processing the low-frame-rate image set captured by the video processing apparatus, the embodiments of the present application can predict an intermediate frame image between two images of adjacent frames and insert it between the two adjacent frame images to obtain a high-frame-rate image set; encoding and outputting the high-frame-rate image set yields a slow-motion video with a reduced playback speed. In addition, before frame interpolation the video processing apparatus also applies temporal filtering to another image set and encodes the temporally filtered image set to obtain a video whose speed is unchanged. The final slow-motion video therefore contains both the unchanged-speed video and the slowed-down video, producing a speed-varying effect, and weighted fusion ensures the smoothness of the temporally filtered video. Through the solution of the embodiments of the present application, a complete and smooth slow-motion video can thus be obtained with an ordinary video processing apparatus.
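Putting the pieces together, the following sketch chains the illustrative temporal_filter and interpolate_frames helpers from the earlier snippets; the function name, the default frame rates and the decision to return the frames plus the encoding frame rate (rather than call a specific encoder) are all assumptions for illustration:

    def make_slow_motion(first_set, second_set, f1=240, f2=30, f3=960):
        """Temporally filter the first image set (real-time part), interpolate the second
        image set (slowed part), and return the frames to be encoded at the second frame
        rate, with the third-image-set frames placed before the fourth-image-set frames."""
        third_set = temporal_filter(first_set, first_fps=f1, second_fps=f2)
        fourth_set = interpolate_frames(second_set, first_fps=f1, third_fps=f3)
        return third_set + fourth_set, f2   # frames in output order, plus the encoding frame rate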
Please refer to Figure 7, which is a schematic structural diagram of a terminal device provided by an embodiment of the present application. As shown in Figure 7, the terminal device in this embodiment may include a processor 701 and a memory 702, connected through a bus 703. The memory 702 is configured to store a computer program, the computer program including program instructions, and the processor 701 is configured to execute the program instructions stored in the memory 702.
In this embodiment of the present application, the processor 701 is configured to perform the following operations by running the executable program code in the memory 702:
when it is determined based on the motion data of pixels of multiple preview frames that a moving object exists in the shooting picture, recording the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, where the first time period is a time length before the second time period;
performing temporal filtering on the first image set to obtain a third image set at a second frame rate, where the second frame rate is lower than the first frame rate;
performing frame interpolation on the second image set to obtain a fourth image set at a third frame rate, where the third frame rate is higher than the first frame rate;
encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
In some feasible implementations, the processor 701 is further configured to:
sample the original images recorded at the first frame rate within the shooting picture to obtain multiple preview frames at the second frame rate;
calculate the motion data of each pixel of the multiple preview frames using the motion vector method.
In some feasible implementations, the processor 701 calculating the motion data of each pixel using the motion vector method includes:
obtaining a pixel of the multiple preview frames in the shooting picture;
performing vector operations on the pixel coordinates of the pixel in the multiple preview frames to obtain the displacement vector of the pixel.
In some feasible implementations, when the displacement vectors of all pixels are identical, no moving object exists in the shooting picture; when the displacement vectors of the pixels take multiple values, a moving object exists in the shooting picture.
In some feasible implementations, the processor 701 performing temporal filtering on the first image set to obtain the third image set at the second frame rate includes:
continuously sampling the first image set according to continuous sampling parameters to obtain multiple frames of images to be fused, where the continuous sampling parameters include a continuous sampling frequency and a number of continuously sampled frames, and the continuous sampling frequency is determined from the first frame rate and the second frame rate;
fusing the multiple frames of images to be fused in groups of the number of continuously sampled frames to obtain the third image set at the second frame rate.
In some feasible implementations, the processor 701 performing frame interpolation on the second image set to obtain the fourth image set at the third frame rate includes:
predicting an intermediate frame image between every two images of adjacent frames in the second image set, where the number of intermediate frame images is determined based on the first frame rate and the third frame rate;
for the intermediate frame image between every two images of adjacent frames, inserting the intermediate frame image between the two images of adjacent frames to obtain the fourth image set at the third frame rate.
In some feasible implementations, the processor 701 is further configured to:
configure recording parameters, the recording parameters including the first frame rate, the second frame rate and the third frame rate.
It should be understood that, in this embodiment of the present application, the processor 701 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 702 may include a read-only memory and a random access memory, and provides instructions and data to the processor 701. A part of the memory 702 may further include a non-volatile random access memory, which may store the first frame rate, the second frame rate, the third frame rate and the like.
In a specific implementation, the processor 701 and the memory 702 described in this embodiment of the present application may carry out the implementations described in the flow of the video processing method provided in Figure 2 or Figure 5 of the embodiments of the present application, and may also carry out the implementations described for the video processing apparatus of Figure 6 provided in the embodiments of the present application, which are not repeated here.
In this embodiment of the present application, when processing the low-frame-rate image set captured by the terminal device, an intermediate frame image between two images of adjacent frames can be predicted and inserted between the two adjacent frame images to obtain a high-frame-rate image set; encoding and outputting the high-frame-rate image set yields a slow-motion video with a reduced playback speed. In addition, before frame interpolation the terminal device also applies temporal filtering to another image set and encodes the temporally filtered image set to obtain a video whose speed is unchanged. The final slow-motion video therefore contains both the unchanged-speed video and the slowed-down video, producing a speed-varying effect, and weighted fusion ensures the smoothness of the temporally filtered video. Through the solution of the embodiments of the present application, a complete and smooth slow-motion video can thus be obtained with an ordinary terminal device.
An embodiment of the present application further provides a chip, which can perform the relevant steps of the terminal device in the foregoing method embodiments. The chip is configured to:
when it is determined based on the motion data of pixels of multiple preview frames that a moving object exists in the shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, where the first time period is a time length before the second time period;
perform temporal filtering on the first image set to obtain a third image set at a second frame rate, where the second frame rate is lower than the first frame rate;
perform frame interpolation on the second image set to obtain a fourth image set at a third frame rate, where the third frame rate is higher than the first frame rate;
encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
In some feasible implementations, the chip is further configured to:
sample the original images recorded at the first frame rate within the shooting picture to obtain multiple preview frames at the second frame rate;
calculate the motion data of each pixel of the multiple preview frames using the motion vector method.
In some feasible implementations, the chip calculating the motion data of each pixel using the motion vector method includes:
obtaining a pixel of the multiple preview frames in the shooting picture;
performing vector operations on the pixel coordinates of the pixel in the multiple preview frames to obtain the displacement vector of the pixel.
In some feasible implementations, when the displacement vectors of all pixels are identical, no moving object exists in the shooting picture; when the displacement vectors of the pixels take multiple values, a moving object exists in the shooting picture.
In some feasible implementations, the chip performing temporal filtering on the first image set to obtain the third image set at the second frame rate includes:
continuously sampling the first image set according to continuous sampling parameters to obtain multiple frames of images to be fused, where the continuous sampling parameters include a continuous sampling frequency and a number of continuously sampled frames, and the continuous sampling frequency is determined from the first frame rate and the second frame rate;
fusing the multiple frames of images to be fused in groups of the number of continuously sampled frames to obtain the third image set at the second frame rate.
In some feasible implementations, the chip performing frame interpolation on the second image set to obtain the fourth image set at the third frame rate includes:
predicting an intermediate frame image between every two images of adjacent frames in the second image set, where the number of intermediate frame images is determined based on the first frame rate and the third frame rate;
for the intermediate frame image between every two images of adjacent frames, inserting the intermediate frame image between the two images of adjacent frames to obtain the fourth image set at the third frame rate.
In some feasible implementations, the chip is further configured to:
configure recording parameters, the recording parameters including the first frame rate, the second frame rate and the third frame rate.
For the relevant content of this implementation, reference may be made to the relevant content of the foregoing method embodiments, which is not detailed here. This embodiment of the present application is based on the same concept as the foregoing method embodiments and brings the same technical effects; for the specific principles, refer to the description of the foregoing method embodiments, which is not repeated here.
An embodiment of the present application further provides a module device. The module device includes a processor and a communication interface, the processor is connected to the communication interface, the communication interface is configured to send and receive signals, and the processor is configured to:
when it is determined based on the motion data of pixels of multiple preview frames that a moving object exists in the shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, where the first time period is a time length before the second time period;
perform temporal filtering on the first image set to obtain a third image set at a second frame rate, where the second frame rate is lower than the first frame rate;
perform frame interpolation on the second image set to obtain a fourth image set at a third frame rate, where the third frame rate is higher than the first frame rate;
encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
In some feasible implementations, the processor is further configured to:
sample the original images recorded at the first frame rate within the shooting picture to obtain multiple preview frames at the second frame rate;
calculate the motion data of each pixel of the multiple preview frames using the motion vector method.
In some feasible implementations, the processor calculating the motion data of each pixel using the motion vector method includes:
obtaining a pixel of the multiple preview frames in the shooting picture;
performing vector operations on the pixel coordinates of the pixel in the multiple preview frames to obtain the displacement vector of the pixel.
In some feasible implementations, when the displacement vectors of all pixels are identical, no moving object exists in the shooting picture; when the displacement vectors of the pixels take multiple values, a moving object exists in the shooting picture.
In some feasible implementations, the processor performing temporal filtering on the first image set to obtain the third image set at the second frame rate includes:
continuously sampling the first image set according to continuous sampling parameters to obtain multiple frames of images to be fused, where the continuous sampling parameters include a continuous sampling frequency and a number of continuously sampled frames, and the continuous sampling frequency is determined from the first frame rate and the second frame rate;
fusing the multiple frames of images to be fused in groups of the number of continuously sampled frames to obtain the third image set at the second frame rate.
In some feasible implementations, the processor performing frame interpolation on the second image set to obtain the fourth image set at the third frame rate includes:
predicting an intermediate frame image between every two images of adjacent frames in the second image set, where the number of intermediate frame images is determined based on the first frame rate and the third frame rate;
for the intermediate frame image between every two images of adjacent frames, inserting the intermediate frame image between the two images of adjacent frames to obtain the fourth image set at the third frame rate.
In some feasible implementations, the processor is further configured to:
configure recording parameters, the recording parameters including the first frame rate, the second frame rate and the third frame rate.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, it can be used to implement the video processing method described in the embodiments of the present application, which is not repeated here. The computer-readable storage medium may be an internal storage unit of the video processing device of any of the foregoing embodiments, for example a hard disk or memory of the device. The computer-readable storage medium may also be an external storage device of the video processing device, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the video processing device. The computer-readable storage medium is used to store the computer program and other programs and data required by the video processing device, and may also be used to temporarily store data that has been output or is to be output.
What is disclosed above is only part of the embodiments of the present application and certainly cannot be used to limit the scope of rights of the present application. A person of ordinary skill in the art can understand all or part of the processes for implementing the foregoing embodiments, and equivalent variations made in accordance with the claims of the present application still fall within the scope covered by the present application.

Claims (12)

  1. A video processing method, comprising:
    when it is determined, based on motion data of pixels of multiple preview frames, that a moving object exists in a shooting picture, recording the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time length before the second time period;
    performing temporal filtering on the first image set to obtain a third image set at a second frame rate, wherein the second frame rate is lower than the first frame rate;
    performing frame interpolation on the second image set to obtain a fourth image set at a third frame rate, wherein the third frame rate is higher than the first frame rate;
    encoding the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  2. The method according to claim 1, wherein the method further comprises:
    sampling original images recorded at the first frame rate within the shooting picture to obtain the multiple preview frames at the second frame rate;
    calculating the motion data of each pixel of the multiple preview frames using a motion vector method.
  3. The method according to claim 2, wherein calculating the motion data of each pixel using the motion vector method comprises:
    obtaining a pixel of the multiple preview frames in the shooting picture;
    performing vector operations on pixel coordinates of the pixel in the multiple preview frames to obtain a displacement vector of the pixel.
  4. The method according to claim 3, wherein when the displacement vectors of all the pixels are identical, no moving object exists in the shooting picture;
    when the displacement vectors of the pixels take multiple values, a moving object exists in the shooting picture.
  5. The method according to claim 1, wherein performing temporal filtering on the first image set to obtain the third image set at the second frame rate comprises:
    continuously sampling the first image set according to continuous sampling parameters to obtain multiple frames of images to be fused, wherein the continuous sampling parameters comprise a continuous sampling frequency and a number of continuously sampled frames, and the continuous sampling frequency is determined from the first frame rate and the second frame rate;
    fusing the multiple frames of images to be fused in groups of the number of continuously sampled frames to obtain the third image set at the second frame rate.
  6. The method according to claim 1, wherein performing frame interpolation on the second image set to obtain the fourth image set at the third frame rate comprises:
    predicting an intermediate frame image between every two images of adjacent frames in the second image set, wherein the number of intermediate frame images is determined based on the first frame rate and the third frame rate;
    for the intermediate frame image between every two images of adjacent frames, inserting the intermediate frame image between the two images of adjacent frames to obtain the fourth image set at the third frame rate.
  7. The method according to claim 1, wherein the method further comprises:
    configuring recording parameters, the recording parameters comprising the first frame rate, the second frame rate and the third frame rate.
  8. A video processing apparatus, comprising:
    a recording unit, configured to, when it is determined based on motion data of pixels of multiple preview frames that a moving object exists in a shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time length before the second time period;
    a temporal filtering unit, configured to perform temporal filtering on the first image set to obtain a third image set at a second frame rate, wherein the second frame rate is lower than the first frame rate;
    a frame interpolation unit, configured to perform frame interpolation on the second image set to obtain a fourth image set at a third frame rate, wherein the third frame rate is higher than the first frame rate;
    an encoding unit, configured to encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  9. A terminal device, comprising a processor and a memory, the processor being connected to the memory, wherein the memory is configured to store program code and the processor is configured to call the program code to perform the video processing method according to any one of claims 1 to 7.
  10. A chip, wherein
    the chip is configured to, when it is determined based on motion data of pixels of multiple preview frames that a moving object exists in a shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time length before the second time period;
    perform temporal filtering on the first image set to obtain a third image set at a second frame rate, wherein the second frame rate is lower than the first frame rate;
    perform frame interpolation on the second image set to obtain a fourth image set at a third frame rate, wherein the third frame rate is higher than the first frame rate;
    encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  11. A module device, comprising a processor and a communication interface, the processor being connected to the communication interface, the communication interface being configured to send and receive signals, and the processor being configured to:
    when it is determined based on motion data of pixels of multiple preview frames that a moving object exists in a shooting picture, record the shooting picture at a first frame rate to obtain a first image set of a first time period and a second image set of a second time period respectively, wherein the first time period is a time length before the second time period;
    perform temporal filtering on the first image set to obtain a third image set at a second frame rate, wherein the second frame rate is lower than the first frame rate;
    perform frame interpolation on the second image set to obtain a fourth image set at a third frame rate, wherein the third frame rate is higher than the first frame rate;
    encode the third image set and the fourth image set respectively at the second frame rate to obtain a slow-motion video.
  12. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the video processing method according to any one of claims 1 to 7 is implemented.
PCT/CN2021/126794 2020-11-26 2021-10-27 Video processing method, apparatus, terminal device and storage medium WO2022111198A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011353666.7A CN112532880B (zh) 2020-11-26 Video processing method, apparatus, terminal device and storage medium
CN202011353666.7 2020-11-26

Publications (1)

Publication Number Publication Date
WO2022111198A1 true WO2022111198A1 (zh) 2022-06-02

Family

ID=74994094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126794 WO2022111198A1 (zh) 2020-11-26 2021-10-27 Video processing method, apparatus, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN112532880B (zh)
WO (1) WO2022111198A1 (zh)


Also Published As

Publication number Publication date
CN112532880B (zh) 2022-03-11
CN112532880A (zh) 2021-03-19


Legal Events

Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21896700; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21896700; Country of ref document: EP; Kind code of ref document: A1)