CN110996170A - Video file playing method and related equipment - Google Patents

Video file playing method and related equipment

Info

Publication number
CN110996170A
Authority
CN
China
Prior art keywords
video file
frame
images
video
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911261099.XA
Other languages
Chinese (zh)
Other versions
CN110996170B (en)
Inventor
郑超
范泽华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911261099.XA priority Critical patent/CN110996170B/en
Publication of CN110996170A publication Critical patent/CN110996170A/en
Application granted granted Critical
Publication of CN110996170B publication Critical patent/CN110996170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4621Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen

Abstract

The application discloses a video file playing method and related equipment, applied to an electronic device comprising a central processing unit (CPU), a video processing module, a graphics processing unit (GPU), a frame insertion chip and a display screen. The method comprises the following steps: the CPU obtains a first video file; if the frame rate of the first video file is greater than or equal to a first threshold, the video processing module performs frame reduction processing on the first video file to obtain a second video file, the frame rate of the second video file being less than that of the first video file; the GPU renders the second video file to obtain a third video file; the frame insertion chip performs frame interpolation processing on the third video file to obtain a fourth video file, the frame rate of the fourth video file being greater than that of the second video file; and the CPU controls the display screen to play the fourth video file. By adopting the embodiments of the application, the power consumption of the electronic device can be reduced while the playing fluency of the video file is improved.

Description

Video file playing method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a video file playing method and related devices.
Background
The playing of video files has become an indispensable function of electronic devices. At present, television dramas and movies are mostly shot in a 24P/30P recording format, which has a long exposure time and low fluency. The conventional approach directly increases the frame rate of the video source file to improve playback fluency, but this greatly increases the power consumption of the electronic device. Therefore, how to reduce the power consumption of the electronic device while ensuring high fluency of video playback is a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a video file playing method and related equipment, which are used for reducing the power consumption of electronic equipment while improving the playing fluency of video files.
In a first aspect, an embodiment of the present application provides a video file playing method, which is applied to an electronic device including a central processing unit CPU, a video processing module, a graphics processing unit GPU, a frame insertion chip, and a display screen, and the method includes:
the CPU acquires a first video file;
if the frame rate of the first video file is greater than or equal to a first threshold value, the video processing module performs frame reduction processing on the first video file to obtain a second video file, wherein the frame rate of the second video file is less than that of the first video file;
the GPU carries out rendering processing on the second video file to obtain a third video file;
the frame interpolation chip performs frame interpolation processing on the third video file to obtain a fourth video file, wherein the frame rate of the fourth video file is greater than that of the second video file;
and the CPU controls the display screen to play the fourth video file.
In a second aspect, an embodiment of the present application provides a video file playing apparatus, which is applied to an electronic device including a central processing unit CPU, a video processing module, a graphics processing unit GPU, a frame insertion chip, and a display screen, and the apparatus includes:
an acquisition unit configured to acquire a first video file by the CPU;
a frame dropping unit, configured to perform frame dropping processing on the first video file through the video processing module to obtain a second video file if the frame rate of the first video file is greater than or equal to a first threshold, where the frame rate of the second video file is less than the frame rate of the first video file;
the rendering unit is used for rendering the second video file through the GPU to obtain a third video file;
the frame interpolation unit is used for performing frame interpolation processing on the third video file through the frame interpolation chip to obtain a fourth video file, and the frame rate of the fourth video file is greater than that of the second video file;
and the playing unit is used for controlling the display screen to play the fourth video file through the CPU.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement part or all of the steps described in the method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, the CPU acquires a first video file; if the frame rate of the first video file is greater than or equal to the first threshold, the video processing module performs frame reduction on the first video file to obtain a second video file, whose frame rate is less than that of the first video file; the GPU renders the second video file to obtain a third video file; the frame insertion chip performs frame interpolation on the third video file to obtain a fourth video file, whose frame rate is greater than that of the second video file; and the CPU controls the display screen to play the fourth video file. Firstly, because the video processing module reduces the frame rate of the first video file before the GPU renders it, the frame rate of the video file being rendered is lower, so the power consumption of the GPU during rendering can be reduced; secondly, frame interpolation is performed afterwards, and the frame rate of the interpolated video file is greater than before interpolation, so the fluency of playing the video file is improved.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic flowchart of a video file playing method according to an embodiment of the present application;
fig. 2B is a schematic structural diagram of a decomposed gaussian pyramid according to an embodiment of the present disclosure;
fig. 2C is a schematic structural diagram of a frame interpolation method according to an embodiment of the present application;
fig. 2D is a schematic structural diagram of another frame interpolation method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another video file playing method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video file playing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
As shown in fig. 1, fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device comprises a processor, a memory, a signal processor, a communication interface, a display screen, a loudspeaker, a microphone, a random access memory (RAM), a frame insertion chip, a video processing module, a camera module, a sensor, and the like. The memory, the signal processor, the display screen, the loudspeaker, the microphone, the RAM, the camera module, the sensor, the frame insertion chip and the video processing module are connected to the processor, and the communication interface is connected to the signal processor.
The display screen may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, or the like.
The camera module can include a common camera and an infrared camera, and is not limited herein. The camera may be a front camera or a rear camera, and is not limited herein.
The sensor comprises at least one of: a light sensor, a gyroscope, an infrared (IR) sensor, a fingerprint sensor, a pressure sensor, and the like. The light sensor, also called an ambient light sensor, is used to detect the ambient light brightness. The light sensor may include a photosensitive element and an analog-to-digital converter: the photosensitive element converts collected optical signals into electrical signals, and the analog-to-digital converter converts the electrical signals into digital signals. Optionally, the light sensor may further include a signal amplifier, which amplifies the electrical signal from the photosensitive element before outputting it to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The processor is the control center of the electronic device. It connects all parts of the electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the electronic device as a whole.
The processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The processor includes a central processing unit (CPU) and a graphics processing unit (GPU). The CPU is a core component of the computer, responsible for reading, decoding and executing instructions; its main functions are interpreting computer instructions, executing operations, controlling timing, and processing the data in computer software. The CPU consists mainly of a controller and an arithmetic unit, along with a cache and the buses that connect them for data and control. The three core components of a computer are the CPU, internal memory, and input/output devices. The GPU, also called the display core, visual processor, or display chip, is a microprocessor dedicated to image and graphics operations on personal computers, workstations, game machines, and some mobile devices (e.g., tablet computers and smartphones). The GPU reduces the graphics card's dependence on the CPU and takes over part of the work originally done by the CPU. In 3D graphics processing, the core technologies adopted by the GPU include hardware T&L (geometric transformation and lighting), cubic environment texture mapping and vertex blending, texture compression and bump mapping, and a dual-texture four-pixel 256-bit rendering engine; the hardware T&L technology can be said to be the hallmark of the GPU.
The memory is used for storing software programs and/or modules, and the processor executes various functional applications and data processing of the electronic equipment by operating the software programs and/or modules stored in the memory. The memory mainly comprises a program storage area and a data storage area, wherein the program storage area can store an operating system, a software program required by at least one function and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The frame insertion chip is used to add one or more frames between every two frames of the original picture, shortening the display time between frames and increasing the frame rate of the electronic device's screen display. This alleviates flicker and trailing, eliminates the edge blur of fast-moving pictures, and corrects the illusion caused by the persistence of human vision, thereby effectively improving picture stability.
The video processing module performs the opposite function of the frame insertion chip: it deletes one or more frames from the multi-frame images of the original picture, lengthening the display time between frames and reducing the frame rate of the electronic device's screen display. The screening algorithm of the video processing module may, for example, retain only odd-numbered frames, retain only even-numbered frames, or retain some odd-numbered and some even-numbered frames; if the frame count is still too high after the first frame reduction, a second frame reduction may be performed.
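The odd/even screening described above can be sketched in a few lines of Python. This is a hypothetical illustration only (the real video processing module operates on decoded frame buffers in hardware, and `drop_frames` is an invented name):

```python
def drop_frames(frames, keep="odd"):
    """Halve the frame rate by keeping only odd- or even-numbered frames.

    Frame numbering is 1-based, matching the odd/even retention
    described for the video processing module's screening algorithm.
    """
    if keep == "odd":
        return frames[0::2]  # frames 1, 3, 5, ...
    return frames[1::2]      # frames 2, 4, 6, ...

# Screening once halves the frame count; if that is still too high,
# the same screening can be applied again (the "second frame reduction").
frames = list(range(1, 9))
once = drop_frames(frames, keep="odd")   # [1, 3, 5, 7]
twice = drop_frames(once, keep="odd")    # [1, 5]
```

Retaining "some odd-numbered and some even-numbered frames" would simply mix the two slicing patterns across segments of the clip.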
The working principle of the embodiment of the application is as follows: the CPU obtains a first video file; if the frame rate of the first video file is greater than or equal to the first threshold, the video processing module performs frame reduction processing on the first video file to obtain a second video file, wherein the frame rate of the second video file is less than that of the first video file; the GPU carries out rendering processing on the second video file to obtain a third video file; the frame interpolation chip performs frame interpolation processing on the third video file to obtain a fourth video file, wherein the frame rate of the fourth video file is greater than that of the second video file; and the CPU controls the display screen to play the fourth video file.
It can be seen that, in the embodiment of the present application, firstly, because the video processing module performs frame reduction on the first video file before the GPU renders it, the frame rate of the video file after frame reduction is lower, so the power consumption of the GPU can be reduced during rendering; secondly, frame interpolation is then performed, and the frame rate of the video file after interpolation is greater than before interpolation, so the fluency of playing the video file is improved.
As shown in fig. 2A, fig. 2A is a schematic flowchart of a video file playing method provided in an embodiment of the present application, applied to the electronic device shown in fig. 1, where the electronic device includes a CPU, a video processing module, a GPU, a frame insertion chip, and a display screen, and the method includes:
step 201: the CPU obtains a first video file.
Step 202: if the frame rate of the first video file is greater than or equal to a first threshold, the video processing module performs frame reduction processing on the first video file to obtain a second video file, wherein the frame rate of the second video file is less than that of the first video file.
Step 203: and the GPU carries out rendering processing on the second video file to obtain a third video file.
Step 204: and the frame interpolation chip performs frame interpolation processing on the third video file to obtain a fourth video file, wherein the frame rate of the fourth video file is greater than that of the second video file.
Step 205: and the CPU controls the display screen to play the fourth video file.
The first video file may be a video source file or an animated image such as a GIF sent by another electronic device, or a video source file or animated image stored in a database associated with the electronic device. The frame rate is the frequency (rate) at which consecutive frames (bitmap images) appear on the display. The first threshold may be, for example, 12 fps, 15 fps, 24 fps, or another value, which is not limited herein.
The frame rate of the fourth video file may be greater than the frame rate of the first video file, or may be less than or equal to the frame rate of the first video file, which is not limited herein. The frame rate of the second video file is equal to the frame rate of the third video file.
For example, if the frame rate of the first video file is 30 fps, the video processing module can screen the first video file to obtain a second video file, reducing the frame rate from 30 fps to 12 fps; the GPU renders the second video file received at 12 fps to obtain a third video file and provides it to the frame insertion chip; the frame insertion chip raises the frame rate to 60 fps by frame interpolation to obtain a fourth video file, yielding a picture smoother than the 30 fps of the first video file. Since the frame rate of the second video file processed by the GPU is 12 fps, which is 18 fps lower than the frame rate of the first video file, the power consumption required by the GPU is reduced by about 380 mW, while the frame insertion chip adds only about 60 mW.
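The frame rates in this example can be traced with a small sketch. The threshold and rates are taken from the example above; the function name and structure are hypothetical, since the real processing happens on the video processing module, GPU, and frame insertion chip:

```python
FIRST_THRESHOLD = 24  # fps; the text also mentions 12 fps and 15 fps as options

def trace_frame_rates(source_fps, reduced_fps=12, target_fps=60):
    """Track the frame rate of the video file through steps 201-205."""
    rates = {"first": source_fps}
    if source_fps >= FIRST_THRESHOLD:
        rates["second"] = reduced_fps      # step 202: frame reduction
    else:
        rates["second"] = source_fps       # no reduction below the threshold
    rates["third"] = rates["second"]       # step 203: rendering keeps the rate
    rates["fourth"] = target_fps           # step 204: frame interpolation
    return rates

print(trace_frame_rates(30))
# {'first': 30, 'second': 12, 'third': 12, 'fourth': 60}
```

The power saving comes from rendering at the `second` rate (12 fps) rather than the `first` rate (30 fps), while the display still receives the `fourth` rate (60 fps).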
It can be seen that, in the embodiment of the present application, the CPU acquires a first video file; if the frame rate of the first video file is greater than or equal to the first threshold, the video processing module performs frame reduction on the first video file to obtain a second video file, whose frame rate is less than that of the first video file; the GPU renders the second video file to obtain a third video file; the frame insertion chip performs frame interpolation on the third video file to obtain a fourth video file, whose frame rate is greater than that of the second video file; and the CPU controls the display screen to play the fourth video file. Firstly, because the video processing module reduces the frame rate of the first video file before the GPU renders it, the frame rate of the video file being rendered is lower, so the power consumption of the GPU during rendering can be reduced; secondly, frame interpolation is performed afterwards, and the frame rate of the interpolated video file is greater than before interpolation, so the fluency of playing the video file is improved.
In an implementation manner of the present application, the method further includes:
if the frame rate of the first video file is smaller than the first threshold value, the GPU performs rendering processing on the first video file to obtain a fifth video file;
the frame interpolation chip performs frame interpolation processing on the fifth video file to obtain a sixth video file, wherein the frame rate of the sixth video file is greater than that of the first video file;
and the CPU controls the display screen to play the sixth video file.
And the frame rate of the fifth video file is the same as that of the first video file.
For example, in the prior art, frame interpolation and rendering are performed directly on the first video file: for a first video file with a resolution of 1080 × 2340, raising the frame rate from 30 fps to 60 fps increases power consumption by 644 mW. By contrast, in the embodiment of the present application, for a first video file with the same 1080 × 2340 resolution, raising the frame rate from 30 fps to 60 fps through the frame insertion chip after rendering increases power consumption by only 36 mW.
As can be seen, in the embodiment of the present application, if the frame rate of the first video file is less than the first threshold, the GPU renders the first video file to obtain a fifth video file; the frame insertion chip performs frame interpolation on the fifth video file to obtain a sixth video file, whose frame rate is greater than that of the first video file; and the CPU controls the display screen to play the sixth video file. Firstly, because the GPU renders the first video file directly and its frame rate is already low, the power consumption of the GPU during rendering is low; secondly, the frame rate of the sixth video file is greater than that of the first video file, so the fluency of playing the video file is improved.
In an implementation manner of the present application, the first video file includes N frames of first images, the second video file includes P frames of first images, and N and P are both integers greater than 1; the video processing module performs frame dropping processing on the first video file to obtain a second video file, and the method comprises the following steps:
the video processing module equally divides the N frames of first images into (N-P) first image sets, each first image set comprising N/(N-P) frames of first images, where both N-P and N/(N-P) are positive integers;
the video processing module deletes one frame of first image in each first image set to obtain (N-P) second image sets, where each second image set comprises [N/(N-P)]-1 frames of first images and the (N-P) second image sets together comprise the P frames of first images.
For example, if the first video file includes 24 frames of first images and the second video file includes 12 frames of first images, the 24 frames are sequentially divided into 12 first image sets, each containing 2 frames; then one frame in each first image set is deleted, that is, the odd-numbered frames may be deleted, the even-numbered frames may be deleted, or some odd-numbered and some even-numbered frames may be deleted.
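The equal-division frame dropping of this claim can be sketched as follows. This is a minimal illustration assuming, as the claim requires, that N/(N-P) is an integer; deleting the first frame of each set is an arbitrary choice here, since the text allows odd, even, or mixed deletion:

```python
def reduce_frames(frames, p):
    """Reduce N frames to P by splitting them into (N-P) equal first image
    sets of N/(N-P) frames each and deleting one frame from every set."""
    n = len(frames)
    group = n // (n - p)                        # frames per first image set
    kept = []
    for i in range(0, n, group):
        kept.extend(frames[i + 1 : i + group])  # drop the set's first frame
    return kept

frames = list(range(1, 25))         # N = 24 first images
second = reduce_frames(frames, 12)  # 12 sets of 2; the even frames remain
print(second)                       # [2, 4, 6, ..., 24]
```

Note that the arithmetic checks out in general: (N-P) sets each keep N/(N-P) - 1 frames, and (N-P) · [N/(N-P) - 1] = N - (N-P) = P.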
It can be seen that, in the embodiment of the present application, the video processing module equally divides the N frames of first images into (N-P) first image sets and then deletes one frame from each set. Deleting frames evenly in this way avoids deleting more frames in one region of the video and fewer in another, which would affect the fluency of the original video file.
Further, the first video file comprises N frames of first images, the second video file comprises P frames of first images, and both N and P are integers greater than 1; the video processing module performs frame dropping processing on the first video file to obtain a second video file, and the method comprises the following steps:
the video processing module determines a third offset vector between every two adjacent first images in the N first images to obtain N-1 third offset vectors;
the video processing module determines N-1 first modular lengths of the N-1 third offset vectors, wherein the N-1 third offset vectors correspond to the N-1 first modular lengths one to one;
the video processing module determines a first modular length of the N-1 first modular lengths that is less than or equal to a second threshold;
the video processing module determines a first image corresponding to the first modular length smaller than or equal to a second threshold value;
and the video processing module deletes the first image corresponding to the first modular length smaller than or equal to the second threshold value to obtain a P frame first image.
For example, if the first image i and the first image j are two adjacent frames of first images, a third offset vector k can be determined from them, with corresponding first modulo length |k|; if |k| is smaller than the second threshold, the first image i or the first image j is deleted.
Further, the video processing module determines a third offset vector between every two adjacent first images in the N first images to obtain N-1 third offset vectors, including:
the video processing module determines the pixel corresponding relation between every two adjacent first images in the N frames of first images;
and the video processing module determines a third offset vector between every two adjacent first images in the N frames of first images based on the pixel correspondence to obtain N-1 third offset vectors.
For example, the first image i includes a plurality of first pixels, the first image j includes a plurality of second pixels, the plurality of first pixels correspond to the plurality of second pixels one-to-one, each first pixel may be represented by a first coordinate, each second pixel may be represented by a second coordinate, and a vector may be determined by the first coordinate of the first pixel and the second coordinate of its corresponding second pixel, so that a plurality of vectors may be determined for the first image i and the first image j, and the sum of the plurality of vectors is the third offset vector.
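The vector-sum computation described above may be sketched as follows (illustrative Python; a known one-to-one pixel correspondence between the two frames is assumed as input):

```python
def third_offset_vector(pixels_i, pixels_j):
    """pixels_i[k] and pixels_j[k] are the (x, y) first and second
    coordinates of the k-th corresponding pixel pair in two adjacent
    first images i and j. The third offset vector is the vector sum
    of all per-pixel displacement vectors."""
    dx = sum(xj - xi for (xi, yi), (xj, yj) in zip(pixels_i, pixels_j))
    dy = sum(yj - yi for (xi, yi), (xj, yj) in zip(pixels_i, pixels_j))
    return (dx, dy)
```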
It can be seen that, in the embodiment of the present application, the video processing module determines a third offset vector between every two adjacent first images in the N frames of first images and then deletes first images based on the first modular lengths of the third offset vectors. In this way, only the first images with little change are deleted from the video file, preserving the original content of the video file to the greatest extent.
In an implementation manner of the present application, the third video file includes a P frame second image, the fourth video file includes a P + Q frame second image, and Q is a positive integer; the frame interpolation chip performs frame interpolation processing on the third video file to obtain a fourth video file, and the method comprises the following steps:
the frame interpolation chip determines a first offset vector between every two adjacent second images in the P frame second image to obtain P-1 first offset vectors;
and the frame interpolation chip interpolates a Q frame second image in the P frame second image based on the P-1 first offset vectors to obtain the P + Q frame second image.
The method of determining the first offset vector may be the same as or different from the method of determining the third offset vector, which is not limited herein.
In an implementation manner of the present application, the determining, by the frame interpolation chip, a first offset vector between every two adjacent second images in the P-frame second image to obtain P-1 first offset vectors includes:
the frame interpolation chip carries out Gaussian pyramid decomposition on the P frame second image to obtain P third image sets;
and the frame interpolation chip determines a first offset vector based on every two adjacent third image sets in the P third image sets to obtain P-1 first offset vectors.
In the Gaussian pyramid decomposition, the m-th-level second image is low-pass filtered and down-sampled with a stride of 2 to obtain the (m-1)-th-level second image, whose resolution and size are lower than those of the m-th-level second image; this is iterated until the 0-th-level second image is obtained. Each third image set thus includes the second images from the 0-th level to the m-th level. As shown in fig. 2B, fig. 2B is a schematic structural diagram of a Gaussian pyramid decomposition according to an embodiment of the present disclosure. For example, the initial resolution of a second image is 256 × 256; after 3 decompositions a second image with a resolution of 32 × 32 is obtained, and the corresponding third image set includes the second images with resolutions of 256 × 256, 128 × 128, 64 × 64, and 32 × 32.
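The decomposition may be sketched as follows (illustrative Python; a 2x2 box average stands in for the unspecified low-pass filter):

```python
import numpy as np

def gaussian_pyramid(image, levels):
    """Low-pass filter (a 2x2 box average stands in for the Gaussian
    low-pass filter here) then down-sample with a stride of 2,
    iterating for the requested number of levels. Returns the images
    highest resolution first; assumes even dimensions at every level."""
    pyramid = [image]
    for _ in range(levels):
        img = pyramid[-1]
        down = (img[0::2, 0::2] + img[1::2, 0::2] +
                img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(down)
    return pyramid
```

Running this on a 256 × 256 image with 3 levels reproduces the 256/128/64/32 set of the example above.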
For another example, a first offset vector is determined based on every two adjacent third image sets in the P third image sets: a second image of the same resolution is taken from each of the two adjacent third image sets, and the first offset vector is then obtained in the same manner as the third offset vector.
In an implementation manner of the present application, the frame interpolation chip interpolating Q frame second images in the P frame second images based on the P-1 first offset vectors includes:
if the P-1 first offset vectors do not comprise 0 vectors, the frame interpolation chip interpolates Q/(P-1) frame second images between every two adjacent frame second images in the P frame second images;
and if the P-1 first offset vectors comprise 0 vectors, the frame interpolation chip determines second images corresponding to non-0 vectors and inserts Q frame second images among the second images corresponding to the non-0 vectors.
As shown in fig. 2C, fig. 2C is a schematic structural diagram of a frame interpolation method according to an embodiment of the present disclosure. Fig. 2C corresponds to the case where P-1 first offset vectors do not include a 0 vector, and Q is P-1, i.e., one frame of second image is inserted between every two adjacent frames of second images.
As shown in fig. 2D, fig. 2D is a schematic structural diagram of another frame interpolation method provided in the embodiment of the present application. Fig. 2D corresponds to the case where the P-1 first offset vectors include a 0 vector: where a first offset vector is a 0 vector, no second image is inserted between the corresponding two frames; where a first offset vector is a non-0 vector, second images are inserted based on the magnitude of its modular length, and the greater the modular length, the more second images are inserted between the two frames of second images.
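The proportional insertion of fig. 2D may be sketched as follows (illustrative Python; the rounding scheme is an assumption and may not preserve the total Q exactly):

```python
import math

def frames_to_insert(offsets, q_total):
    """Distribute q_total inserted second images across adjacent pairs
    in proportion to each first offset vector's modular length; pairs
    whose first offset vector is the 0 vector receive no inserted frames."""
    mods = [math.hypot(dx, dy) for dx, dy in offsets]
    total = sum(mods)
    return [round(q_total * m / total) for m in mods]
```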
In an implementation manner of the present application, before the frame interpolation chip inserts a Q/(P-1) frame second image between every two adjacent frame second images in the P frame second image, the method further includes:
the frame interpolation chip determines Q second offset vectors based on the P-1 first offset vectors, wherein each first offset vector is used for determining Q/(P-1) second offset vectors;
and the frame interpolation chip determines Q second images based on the Q second offset vectors, and the Q second images correspond to the Q second offset vectors in a one-to-one mode.
Further, the frame interpolation chip determines Q second offset vectors based on the P-1 first offset vectors, each first offset vector being used to determine Q/(P-1) second offset vectors, including:
and the frame interpolation chip determines Q/(P-1) second offset vectors corresponding to each first offset vector based on each first offset vector to obtain Q second offset vectors.
Specifically, the frame interpolation chip equally divides each first offset vector into Q/(P-1) second offset vectors, and the vector sum of the Q/(P-1) second offset vectors is the first offset vector.
Further, the frame interpolation chip determines a Q-frame second image based on the Q second offset vectors, including:
and the frame interpolation chip translates the pixels included in each second image according to the corresponding Q/(P-1) second offset vectors to obtain Q/(P-1) images.
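The equal-division interpolation for one adjacent pair may be sketched as follows (illustrative Python; np.roll with integer shifts stands in for the chip's pixel translation, and rounding the accumulated offsets is an assumption):

```python
import numpy as np

def interpolate_pair(frame, first_offset, q):
    """Split a first offset vector into q equal second offset vectors
    and generate q inserted second images by translating the earlier
    frame's pixels by the accumulated second offset vectors."""
    dx, dy = first_offset
    inserted = []
    for k in range(1, q + 1):
        sx = round(dx * k / q)       # sum of the first k second offset vectors
        sy = round(dy * k / q)
        inserted.append(np.roll(frame, shift=(sy, sx), axis=(0, 1)))
    return inserted
```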
It can be seen that in the embodiment of the application, when the first offset vector is not a 0 vector, the frame is inserted by an equal division method, so that the fluency of the video file is improved; when the first offset vector is a 0 vector, the frame interpolation is not performed between the two corresponding frames of second images, so that the meaningless frame interpolation is reduced, and the power consumption of a frame interpolation chip is reduced.
As shown in fig. 3, fig. 3 is a schematic flowchart of another video file playing method provided in this embodiment of the present application, and is applied to the electronic device shown in fig. 1, where the electronic device includes a CPU, a GPU, a frame interpolation chip, a video processing module, and a display screen, and the method includes:
step 301: the CPU obtains a first video file, wherein the first video file comprises N frames of first images, and N is an integer greater than 1.
Step 302: the CPU determines whether a frame rate of the first video file is less than a first threshold.
If yes, go to step 314;
if not, go to step 303.
Step 303: the video processing module equally divides the N first images into (N-P) first image sets, each first image set comprises N/(N-P) first images, and the N-P and the N/(N-P) are positive integers.
Step 304: the video processing module deletes one frame of the first images in each first image set to obtain (N-P) second image sets, each second image set comprises [ N/(N-P) ] -1 frame of the first images, the (N-P) second image sets comprise P frames of the first images in total, a second video file comprises the P frames of the first images, P is an integer greater than 1, and the frame rate of the second video file is less than that of the first video file.
Step 305: and the GPU carries out rendering processing on the second video file to obtain a third video file, wherein the third video file comprises a P frame second image.
Step 306: and the frame interpolation chip carries out Gaussian pyramid decomposition on the P frame second image to obtain P third image sets.
Step 307: and the frame interpolation chip determines a first offset vector based on every two adjacent third image sets in the P third image sets to obtain P-1 first offset vectors.
Step 308: the frame interpolation chip determines whether the P-1 first offset vectors include a 0 vector.
If not, go to step 309;
if yes, go to step 310.
Step 309: the frame interpolation chip interpolates a Q/(P-1) frame second image between every two adjacent frames of second images in the P frame second images to obtain a P + Q frame second image, a fourth video file comprises the P + Q frame second images, Q is a positive integer, and the frame rate of the fourth video file is greater than that of the second video file, and then step 313 is executed.
Step 310: the frame interpolation chip determines Q second offset vectors based on the P-1 first offset vectors, each first offset vector being used to determine Q/(P-1) second offset vectors.
Step 311: and the frame interpolation chip determines Q second images based on the Q second offset vectors, and the Q second images correspond to the Q second offset vectors in a one-to-one mode.
Step 312: the frame interpolation chip determines second images corresponding to non-0 vectors, and inserts the Q frame second images between the second images corresponding to the non-0 vectors to obtain P + Q frame second images, a fourth video file comprises the P + Q frame second images, Q is a positive integer, and the frame rate of the fourth video file is greater than that of the second video file.
Step 313: and the CPU controls the display screen to play the fourth video file.
Step 314: and the GPU carries out rendering processing on the first video file to obtain a fifth video file.
Step 315: and the frame interpolation chip performs frame interpolation processing on the fifth video file to obtain a sixth video file, wherein the frame rate of the sixth video file is greater than that of the first video file.
Step 316: and the CPU controls the display screen to play the sixth video file.
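The overall control flow of steps 301 to 316 may be sketched as follows (illustrative Python; the four callables stand in for the video processing module, the GPU, the frame interpolation chip, and the display screen of fig. 1, whose interfaces are hypothetical):

```python
def play_video(first_video, frame_rate, first_threshold,
               drop_frames, render, interpolate, play):
    """High-rate files are frame-dropped before rendering and
    interpolation; low-rate files skip the frame-dropping branch."""
    if frame_rate >= first_threshold:        # step 302
        video = drop_frames(first_video)     # steps 303-304: frame dropping
    else:
        video = first_video                  # low frame rate: no dropping
    video = render(video)                    # step 305 / step 314: GPU render
    video = interpolate(video)               # steps 306-312 / step 315
    return play(video)                       # step 313 / step 316: display
```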
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
In accordance with the embodiments shown in fig. 2A and fig. 3, please refer to fig. 4, and fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device includes a central processing unit CPU, a video processing module, a graphics processing unit GPU, a frame insertion chip, and a display screen, and as shown in the figure, the electronic device further includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
acquiring a first video file through the CPU;
if the frame rate of the first video file is greater than or equal to a first threshold value, performing frame reduction processing on the first video file through the video processing module to obtain a second video file, wherein the frame rate of the second video file is less than that of the first video file;
rendering the second video file through the GPU to obtain a third video file;
performing frame interpolation processing on the third video file through the frame interpolation chip to obtain a fourth video file, wherein the frame rate of the fourth video file is greater than that of the second video file;
and controlling the display screen to play the fourth video file through the CPU.
In an implementation manner of the present application, the program includes instructions for further performing the following steps:
if the frame rate of the first video file is smaller than the first threshold value, performing rendering processing on the first video file through the GPU to obtain a fifth video file;
performing frame interpolation processing on the fifth video file through the frame interpolation chip to obtain a sixth video file, wherein the frame rate of the sixth video file is greater than that of the first video file;
and controlling the display screen to play the sixth video file through the CPU.
In an implementation manner of the present application, the first video file includes N frames of first images, the second video file includes P frames of first images, and N and P are both integers greater than 1; in the aspect of obtaining a second video file by performing frame dropping processing on the first video file through the video processing module, the program includes instructions specifically configured to perform the following steps:
equally dividing, by the video processing module, the N first images into (N-P) first image sets, each first image set comprising N/(N-P) first images, the N-P, the N/(N-P) being positive integers;
deleting, by the video processing module, one frame of the first image in each of the first image sets, resulting in (N-P) second image sets, each of which includes [ N/(N-P) ] -1 frame of the first image, the (N-P) second image sets collectively including the P frames of the first image.
In an implementation manner of the present application, the third video file includes a P frame second image, the fourth video file includes a P + Q frame second image, and Q is a positive integer; in the aspect of obtaining a fourth video file by performing frame interpolation on the third video file through the frame interpolation chip, the program includes instructions specifically configured to perform the following steps:
determining a first offset vector between every two adjacent second images in the P frame second image through the frame interpolation chip to obtain P-1 first offset vectors;
and inserting a Q frame second image into the P frame second image through the frame inserting chip based on the P-1 first offset vectors to obtain the P + Q frame second image.
In an implementation manner of the present application, in determining, by the frame interpolation chip, a first offset vector between every two adjacent second images in the P-frame second image to obtain P-1 first offset vectors, the program includes instructions specifically configured to perform the following steps:
performing Gaussian pyramid decomposition on the P frame second image through the frame interpolation chip to obtain P third image sets;
and determining a first offset vector based on every two adjacent third image sets in the P third image sets through the frame interpolation chip to obtain P-1 first offset vectors.
In an implementation of the present application, in inserting, by the frame interpolation chip, a Q-frame second image into the P-frame second image based on the P-1 first offset vectors, the above program includes instructions specifically configured to:
if the P-1 first offset vectors do not comprise 0 vectors, inserting Q/(P-1) frame second images between every two adjacent frame second images in the P frame second images through the frame inserting chip;
and if the P-1 first offset vectors comprise 0 vectors, determining second images corresponding to non-0 vectors through the frame interpolation chip, and interpolating Q frame second images between the second images corresponding to the non-0 vectors.
In an implementation manner of the present application, before inserting, by the frame interpolation chip, a Q/(P-1) frame second image between every two adjacent frame second images in the P frame second image, the program includes instructions for further performing the following steps:
determining, by the frame interpolation chip, Q second offset vectors based on the P-1 first offset vectors, each first offset vector for determining Q/(P-1) second offset vectors;
and determining Q second images based on the Q second offset vectors through the frame interpolation chip, wherein the Q second images correspond to the Q second offset vectors in a one-to-one mode.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
The above embodiments mainly introduce the scheme of the embodiments of the present application from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
The following is an embodiment of the apparatus of the present application, which is used to execute the method implemented by the embodiment of the method of the present application. Referring to fig. 5, fig. 5 is a schematic structural diagram of a video file playing apparatus provided in an embodiment of the present application, and the apparatus is applied to an electronic device including a central processing unit CPU, a video processing module, a graphics processing unit GPU, a frame insertion chip, and a display screen, and the apparatus includes:
an obtaining unit 501, configured to obtain, by the CPU, a first video file;
a frame dropping unit 502, configured to perform frame dropping processing on the first video file through the video processing module to obtain a second video file if the frame rate of the first video file is greater than or equal to a first threshold, where the frame rate of the second video file is less than the frame rate of the first video file;
a rendering unit 503, configured to perform rendering processing on the second video file through the GPU to obtain a third video file;
the frame interpolation unit 504 is configured to perform frame interpolation processing on the third video file through the frame interpolation chip to obtain a fourth video file, where a frame rate of the fourth video file is greater than a frame rate of the second video file;
a playing unit 505, configured to control the display screen to play the fourth video file through the CPU.
In an implementation manner of the present application, the rendering unit 503 is further configured to perform rendering processing on the first video file through the GPU if the frame rate of the first video file is less than the first threshold, so as to obtain a fifth video file;
the frame interpolation unit 504 is further configured to perform frame interpolation on the fifth video file by the frame interpolation chip to obtain a sixth video file, where a frame rate of the sixth video file is greater than a frame rate of the first video file;
the playing unit 505 is further configured to control the display screen to play the sixth video file through the CPU.
In an implementation manner of the present application, the first video file includes N frames of first images, the second video file includes P frames of first images, and N and P are both integers greater than 1; in the aspect of performing frame reduction processing on the first video file through the video processing module to obtain a second video file, the frame reduction unit 502 is specifically configured to:
equally dividing, by the video processing module, the N first images into (N-P) first image sets, each first image set comprising N/(N-P) first images, the N-P, the N/(N-P) being positive integers;
deleting, by the video processing module, one frame of the first image in each of the first image sets, resulting in (N-P) second image sets, each of which includes [ N/(N-P) ] -1 frame of the first image, the (N-P) second image sets collectively including the P frames of the first image.
In an implementation manner of the present application, the third video file includes a P frame second image, the fourth video file includes a P + Q frame second image, and Q is a positive integer; in terms of performing frame interpolation processing on the third video file by using the frame interpolation chip to obtain a fourth video file, the frame interpolation unit 504 is specifically configured to:
determining a first offset vector between every two adjacent second images in the P frame second image through the frame interpolation chip to obtain P-1 first offset vectors;
and inserting a Q frame second image into the P frame second image through the frame inserting chip based on the P-1 first offset vectors to obtain the P + Q frame second image.
In an implementation manner of the present application, in determining, by the frame interpolation chip, a first offset vector between every two adjacent second images in the P-frame second image to obtain P-1 first offset vectors, the frame interpolation unit 504 is specifically configured to:
performing Gaussian pyramid decomposition on the P frame second image through the frame interpolation chip to obtain P third image sets;
and determining a first offset vector based on every two adjacent third image sets in the P third image sets through the frame interpolation chip to obtain P-1 first offset vectors.
In an implementation manner of the present application, in terms that the frame interpolation chip inserts a Q frame second image into the P frame second image based on the P-1 first offset vectors, the frame interpolation unit 504 is specifically configured to:
if the P-1 first offset vectors do not comprise 0 vectors, inserting Q/(P-1) frame second images between every two adjacent frame second images in the P frame second images through the frame inserting chip;
and if the P-1 first offset vectors comprise 0 vectors, determining second images corresponding to non-0 vectors through the frame interpolation chip, and interpolating Q frame second images between the second images corresponding to the non-0 vectors.
In an implementation manner of the present application, before inserting, by the frame interpolation chip, a Q/(P-1) frame second image between every two adjacent frame second images in the P frame second image, the frame interpolation unit 504 is specifically configured to:
determining, by the frame interpolation chip, Q second offset vectors based on the P-1 first offset vectors, each first offset vector for determining Q/(P-1) second offset vectors;
and determining Q second images based on the Q second offset vectors through the frame interpolation chip, wherein the Q second images correspond to the Q second offset vectors in a one-to-one mode.
It should be noted that the obtaining unit 501, the frame dropping unit 502, the rendering unit 503, the frame inserting unit 504, and the playing unit 505 may be implemented by a processor.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A video file playing method is applied to electronic equipment comprising a Central Processing Unit (CPU), a video processing module, a Graphics Processing Unit (GPU), a frame insertion chip and a display screen, and comprises the following steps:
the CPU acquires a first video file;
if the frame rate of the first video file is greater than or equal to a first threshold value, the video processing module performs frame reduction processing on the first video file to obtain a second video file, wherein the frame rate of the second video file is less than that of the first video file;
the GPU carries out rendering processing on the second video file to obtain a third video file;
the frame interpolation chip performs frame interpolation processing on the third video file to obtain a fourth video file, wherein the frame rate of the fourth video file is greater than that of the second video file;
and the CPU controls the display screen to play the fourth video file.
2. The method of claim 1, further comprising:
if the frame rate of the first video file is smaller than the first threshold value, the GPU performs rendering processing on the first video file to obtain a fifth video file;
the frame interpolation chip performs frame interpolation processing on the fifth video file to obtain a sixth video file, wherein the frame rate of the sixth video file is greater than that of the first video file;
and the CPU controls the display screen to play the sixth video file.
3. The method according to claim 1 or 2, wherein the first video file comprises N frames of first pictures, the second video file comprises P frames of first pictures, and both N and P are integers greater than 1; the video processing module performs frame dropping processing on the first video file to obtain a second video file, and the method comprises the following steps:
the video processing module equally divides the N first images into (N-P) first image sets, each first image set comprises N/(N-P) first images, and the N-P and the N/(N-P) are positive integers;
the video processing module deletes one frame of the first images in each first image set to obtain (N-P) second image sets, wherein each second image set comprises [ N/(N-P) ] -1 frame of the first images, and the (N-P) second image sets comprise the P frames of the first images in total.
4. The method according to any of claims 1-3, wherein the third video file comprises P frames of the second picture, the fourth video file comprises P + Q frames of the second picture, and Q is a positive integer; the frame interpolation chip performs frame interpolation processing on the third video file to obtain a fourth video file, and the method comprises the following steps:
the frame interpolation chip determines a first offset vector between every two adjacent second images in the P frame second image to obtain P-1 first offset vectors;
and the frame interpolation chip interpolates a Q frame second image in the P frame second image based on the P-1 first offset vectors to obtain the P + Q frame second image.
5. The method of claim 4, wherein the frame interpolation chip determines a first offset vector between every two adjacent second images in the P-frame second images to obtain P-1 first offset vectors, comprising:
the frame interpolation chip carries out Gaussian pyramid decomposition on the P frame second image to obtain P third image sets;
and the frame interpolation chip determines a first offset vector based on every two adjacent third image sets in the P third image sets to obtain P-1 first offset vectors.
6. The method of claim 4 or 5, wherein the frame interpolation chip interpolates Q second pictures in the P second pictures based on the P-1 first offset vectors, comprising:
if the P-1 first offset vectors do not comprise 0 vectors, the frame interpolation chip interpolates Q/(P-1) frame second images between every two adjacent frame second images in the P frame second images;
and if the P-1 first offset vectors comprise 0 vectors, the frame interpolation chip determines second images corresponding to non-0 vectors and inserts Q frame second images among the second images corresponding to the non-0 vectors.
7. The method of claim 6, wherein before the frame interpolation chip interpolates Q/(P-1) frames of second images between every two adjacent frames of second images in the P frames of second images, the method further comprises:
the frame interpolation chip determines Q second offset vectors based on the P-1 first offset vectors, wherein each first offset vector is used for determining Q/(P-1) second offset vectors;
and the frame interpolation chip determines Q second images based on the Q second offset vectors, and the Q second images correspond to the Q second offset vectors in a one-to-one mode.
8. A video file playing apparatus, applied to an electronic device comprising a central processing unit (CPU), a video processing module, a graphics processing unit (GPU), a frame interpolation chip, and a display screen, the apparatus comprising:
an acquisition unit, configured to acquire a first video file through the CPU;
a frame dropping unit, configured to, if the frame rate of the first video file is greater than or equal to a first threshold, perform frame dropping processing on the first video file through the video processing module to obtain a second video file, wherein the frame rate of the second video file is less than the frame rate of the first video file;
a rendering unit, configured to render the second video file through the GPU to obtain a third video file;
a frame interpolation unit, configured to perform frame interpolation processing on the third video file through the frame interpolation chip to obtain a fourth video file, wherein the frame rate of the fourth video file is greater than that of the second video file;
and a playing unit, configured to control the display screen to play the fourth video file through the CPU.
9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any one of claims 1-7.
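Taken together, the claims describe a frame-rate budget: drop frames before the expensive GPU render, then have the interpolation chip restore a high rate for the display. A minimal sketch of that dataflow (the 2:1 drop ratio and all names here are illustrative assumptions; the claims only require the dropped rate to be lower and the interpolated rate to be higher):

```python
def playback_pipeline(frame_rate, first_threshold, target_rate):
    """Return the frame rate after each stage: drop, render, interpolate."""
    # Frame-dropping stage runs only at or above the first threshold.
    if frame_rate >= first_threshold:
        dropped_rate = frame_rate // 2  # illustrative 2:1 drop
    else:
        dropped_rate = frame_rate
    # The GPU renders at the reduced rate, which is the point of dropping:
    # rendering cost scales with the number of frames rendered.
    rendered_rate = dropped_rate
    # The frame interpolation chip then raises the rate for the display.
    interpolated_rate = max(target_rate, rendered_rate)
    return dropped_rate, rendered_rate, interpolated_rate
```

The design rationale is that motion-compensated interpolation in a dedicated chip is cheaper per frame than GPU rendering, so shifting frames from the render stage to the interpolation stage lowers total cost at the same displayed frame rate.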
CN201911261099.XA 2019-12-10 2019-12-10 Video file playing method and related equipment Active CN110996170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911261099.XA CN110996170B (en) 2019-12-10 2019-12-10 Video file playing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911261099.XA CN110996170B (en) 2019-12-10 2019-12-10 Video file playing method and related equipment

Publications (2)

Publication Number Publication Date
CN110996170A true CN110996170A (en) 2020-04-10
CN110996170B CN110996170B (en) 2022-02-15

Family

ID=70091952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911261099.XA Active CN110996170B (en) 2019-12-10 2019-12-10 Video file playing method and related equipment

Country Status (1)

Country Link
CN (1) CN110996170B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316360A (en) * 2008-07-17 2008-12-03 杭州华三通信技术有限公司 Method and device for converting video stream frame rate
KR20130123757A (en) * 2012-05-03 2013-11-13 삼성테크윈 주식회사 Method of transmitting and processing video frame
CN104282286A (en) * 2014-10-30 2015-01-14 广东欧珀移动通信有限公司 Method and system for adjusting display refresh rate of smart mobile terminal according to application scene
US9473758B1 (en) * 2015-12-06 2016-10-18 Sliver VR Technologies, Inc. Methods and systems for game video recording and virtual reality replay
CN107027029A (en) * 2017-03-01 2017-08-08 四川大学 High-performance video coding improved method based on frame rate conversion
CN107079135A (en) * 2016-01-29 2017-08-18 深圳市大疆创新科技有限公司 Method of transmitting video data, system, equipment and filming apparatus
CN107277616A (en) * 2017-07-21 2017-10-20 广州爱拍网络科技有限公司 Special video effect rendering intent, device and terminal
CN108933952A (en) * 2017-05-26 2018-12-04 中兴通讯股份有限公司 A kind of video broadcasting method, device, computer equipment and computer-readable medium
CN109064528A (en) * 2018-07-18 2018-12-21 深圳市前海手绘科技文化有限公司 A kind of quick application method of MG animation in the short-sighted frequency of Freehandhand-drawing
CN109905717A (en) * 2017-12-11 2019-06-18 四川大学 A kind of H.264/AVC Encoding Optimization based on Space-time domain down-sampling and reconstruction
US10469860B1 (en) * 2018-06-19 2019-11-05 Hulu, LLC Chunk parallel transcoding using overlapping frames

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAO Yonghao: "Adaptive Frame Dropping/Interpolation Video Processing Based on Video Objects", Journal of South China University of Technology *
XIAO Yonghao: "Adaptive Frame Dropping/Interpolation Video Processing", Proceedings of the 6th National Joint Academic Conference on Computer Applications *
HUANG Kunlun, BAI Wei: "A Survey of Video Frame Rate Up-Conversion Technology", Digital Communication World *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813490A (en) * 2020-08-14 2020-10-23 Oppo广东移动通信有限公司 Method and device for processing interpolation frame
CN112565868A (en) * 2020-12-04 2021-03-26 维沃移动通信有限公司 Video playing method and device and electronic equipment
CN112565868B (en) * 2020-12-04 2022-12-06 维沃移动通信有限公司 Video playing method and device and electronic equipment
CN113099132A (en) * 2021-04-19 2021-07-09 深圳市帧彩影视科技有限公司 Video processing method, video processing apparatus, electronic device, storage medium, and program product
CN113691756A (en) * 2021-07-15 2021-11-23 维沃移动通信(杭州)有限公司 Video playing method and device and electronic equipment
CN113556495A (en) * 2021-07-21 2021-10-26 北京达佳互联信息技术有限公司 Video recording method, video processing method and corresponding equipment
CN113556495B (en) * 2021-07-21 2022-10-28 北京达佳互联信息技术有限公司 Video recording method, video processing method and corresponding equipment
CN114285958A (en) * 2021-12-28 2022-04-05 维沃移动通信有限公司 Image processing circuit, image processing method, and electronic device
WO2023125159A1 (en) * 2021-12-28 2023-07-06 维沃移动通信有限公司 Video generation circuits, video generation method, and electronic devices
WO2023125677A1 (en) * 2021-12-28 2023-07-06 维沃移动通信有限公司 Discrete graphics frame interpolation circuit, method, and apparatus, chip, electronic device, and medium
CN114500853A (en) * 2022-02-25 2022-05-13 维沃移动通信有限公司 Electronic device and image display method
WO2023160669A1 (en) * 2022-02-25 2023-08-31 维沃移动通信有限公司 Electronic device and image display method

Also Published As

Publication number Publication date
CN110996170B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN110996170B (en) Video file playing method and related equipment
CN111225150B (en) Method for processing interpolation frame and related product
WO2019153671A1 (en) Image super-resolution method and apparatus, and computer readable storage medium
CN108391060B (en) Image processing method, image processing device and terminal
KR20200052846A (en) Data processing systems
US9443281B2 (en) Pixel-based warping and scaling accelerator
KR102327972B1 (en) Projection image construction method and device
US9148564B2 (en) Image pickup apparatus, information processing system and image data processing method
CN103248797A (en) Video resolution enhancing method and module based on FPGA (field programmable gate array)
CN105765968A (en) Frame manipulation to reduce rolling shutter artifacts
CN110012336B (en) Picture configuration method, terminal and device of live interface
US20190244424A1 (en) Methods and apparatus for providing rotated spherical viewpoints
CN110211057A (en) A kind of image processing method based on full convolutional network, device and computer equipment
CN109121000A (en) A kind of method for processing video frequency and client
US20230153958A1 (en) Ai frame engine for mobile edge
US20230300475A1 (en) Image processing method and apparatus, and electronic device
CN114040246A (en) Image format conversion method, device, equipment and storage medium of graphic processor
CN107580228B (en) Monitoring video processing method, device and equipment
US20070133899A1 (en) Triggering an image processing function
CN113393391B (en) Image enhancement method, image enhancement device, electronic apparatus, and storage medium
US20180063551A1 (en) Apparatus and methods for frame interpolation
US20130021371A1 (en) Image display apparatus and image display method
CN113228095A (en) Image processing method, display processing apparatus, and computer-readable storage medium
WO2023185706A1 (en) Image processing method, image processing apparatus and storage medium
CN117278688A (en) Image processing circuit, method and device, chip, electronic equipment and main control chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant