CN111935542A - Video processing method, video playing method, device, equipment and storage medium


Info

Publication number
CN111935542A
Authority
CN
China
Prior art keywords
video, block, frames, frame, video frame
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202010847663.2A
Other languages
Chinese (zh)
Inventor
刘春宇
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202010847663.2A priority Critical patent/CN111935542A/en
Publication of CN111935542A publication Critical patent/CN111935542A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263 Processing of video elementary streams involving reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video processing method, a video playing method, an apparatus, a device, and a storage medium, belonging to the technical field of audio and video processing. The method comprises the following steps: acquiring a video; dividing each video frame in the video according to division parameters to obtain n block frames of each video frame; storing the n block frames of each video frame to obtain the storage position of each block frame; and generating splicing information of the n block frames of each video frame, wherein the storage positions and the splicing information of the n block frames of the video frame are used for restoring and playing the video. Because the resolution of a block frame is smaller than that of the video, less memory is consumed when decoding the block frames to restore and play the video, and a video whose resolution exceeds the hardware limit can be played. The flexibility of playing videos is improved.

Description

Video processing method, video playing method, device, equipment and storage medium
Technical Field
The present application relates to the field of audio and video processing technologies, and in particular, to a video processing method, a video playing method, an apparatus, a device, and a storage medium.
Background
At present, when an electronic device needs to play a video, it usually first loads the video file of the video, decodes the video file, and then renders the video according to the information obtained by decoding, so that the video is played.
In the process of implementing the present application, the inventors found that video resolutions keep increasing, for example 4096 × 2160 for 4K video, and that an electronic device needs high computing performance to play such high-resolution video. Specifically, for a video file with a higher resolution, the electronic device needs to consume more memory to decode the video file, and when the memory of the electronic device is insufficient, the video cannot be played. In addition, when the resolution of the video to be played exceeds the resolution supported by the electronic device, the video cannot be played either, so the flexibility of playing videos is low.
Disclosure of Invention
The application provides a video processing method, a video playing method, an apparatus, a device, and a storage medium, which can break through hardware limitations and play videos whose resolution exceeds the hardware limit. The technical scheme is as follows:
according to an aspect of the present application, there is provided a video processing method, the method including:
acquiring a video;
dividing each video frame in the video according to a division parameter to obtain n block frames of each video frame, wherein n is an integer greater than 1;
storing the n block frames of each video frame to obtain the storage position of each block frame;
and generating splicing information of the n blocking frames of each video frame, wherein the storage positions of the n blocking frames of the video frames and the splicing information are used for restoring and playing the video.
According to another aspect of the present application, there is provided a video playing method, including:
acquiring storage positions and splicing information of n block frames of a video frame, wherein the block frames are obtained by dividing each video frame in a video, and n is an integer greater than 1;
reading n block frames of the video frame according to the storage position;
splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video;
and playing the video frame of the video.
According to another aspect of the present application, there is provided a video processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring a video;
the segmentation module is used for segmenting each video frame in the video according to segmentation parameters to obtain n block frames of each video frame, wherein n is an integer greater than 1;
the storage module is used for storing the n block frames of each video frame to obtain the storage position of each block frame;
and the generating module is used for generating splicing information of the n blocked frames of each video frame, and the storage positions of the n blocked frames of the video frames and the splicing information are used for restoring and playing the video.
Optionally, the segmentation parameter comprises a number of partitions; the segmentation module is configured to:
determining a block position of each block frame in the video frame according to the number of blocks and the resolution of the video frame, wherein the block position is used for indicating the position of each block frame in the corresponding video frame;
and dividing each video frame in the video according to the block position of each block frame to obtain n block frames of each video frame.
Optionally, the decoding timestamp of the video frame is the same as the decoding timestamps of the n block frames of the video frame; the storage module includes:
a first numbering sub-module, configured to number n block frames of each video frame according to an order of the decoding timestamps and a block position corresponding to each block frame, where the number is used to indicate the block position to which the block frame belongs and an order in which the block frame is decoded;
and the first storage submodule is used for storing the n block frames of each video frame into a target file according to the sequence of the numbers, and the storage position of the target file is the storage position of each block frame.
Optionally, the number includes a timestamp number and a location number, and the first numbering sub-module is configured to:
for any one block frame of each video frame, generating the timestamp number according to the decoding timestamp of the block frame;
generating the position number according to the block position corresponding to the block frame;
and splicing the timestamp numbers and the position numbers to obtain the numbers of the block frames.
Optionally, the generating module includes:
and the first generation submodule is used for generating corresponding information of the position number and the block position of each block frame to obtain the splicing information.
Optionally, the decoding timestamp of the video frame is the same as the decoding timestamps of the n block frames of the video frame; the storage module includes:
a second numbering submodule, configured to number the n block frames of each video frame according to the same numbering rule according to the block positions corresponding to the block frames, where the numbering is used to indicate the block positions to which the block frames belong;
and the second storage submodule is used for storing the n block frames belonging to the same decoding time stamp at the same position, and the same position is the storage position of the block frame.
Optionally, the generating module includes:
and the second generation submodule is used for generating corresponding information of the serial number of each block frame and the block position and corresponding information of the storage position and the decoding time stamp of the block frame to obtain the splicing information.
According to another aspect of the present application, there is provided a video playback apparatus, the apparatus including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring the storage positions and splicing information of n block frames of video frames, the block frames are obtained by dividing each video frame in a video, and n is an integer larger than 1;
a reading module, configured to read n block frames of the video frame according to the storage location;
the splicing module is used for splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video;
and the playing module is used for playing the video frame of the video.
Optionally, the storage location includes a storage location of a target file, where the block frames stored according to numbers are stored in the target file, and the reading module includes:
and the first reading submodule is used for reading the n block frames from the target file according to the serial numbers to obtain the n block frames of the video frame.
Optionally, the number comprises a timestamp number; the first reading submodule is used for:
and reading the n block frames from the target file according to the timestamp numbers.
Optionally, the number further includes a position number, and the splicing information includes corresponding information of the position number and a blocking position, where the blocking position is used to indicate a position of each of the blocking frames in the corresponding video frame; the splicing module comprises:
a first determining submodule, configured to determine the blocking position of each of the n blocking frames of the video frame according to the position number of the blocking frame and the splicing information;
and the first rendering submodule is used for rendering the n blocking frames to the blocking positions corresponding to the blocking frames respectively to obtain the video frames of the video.
Optionally, the splicing information includes information corresponding to decoding timestamps of the n block frames stored in the storage location, where the decoding timestamps of the n block frames are the same; the reading module comprises:
and the second reading submodule is used for reading the n block frames of the video frame from the storage position according to the splicing information.
Optionally, each of the block frames corresponds to a number, and the splicing information further includes corresponding information between the number of each of the block frames and a block position, where the block position is used to indicate a position of each of the block frames in the corresponding video frame; the splicing module comprises:
a second determining submodule, configured to determine the blocking position of each of the n blocking frames of the video frame according to the number of the blocking frame and the splicing information;
and the second rendering submodule is used for rendering the n blocking frames to the blocking positions corresponding to the blocking frames respectively to obtain the video frames of the video.
According to another aspect of the present application, there is provided an electronic device comprising a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the video processing method or the video playing method according to the above aspect.
According to another aspect of the present application, there is provided a computer storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which when loaded and executed by a processor of an electronic device, implements the video processing method or the video playing method of the above aspect.
The technical scheme provided by the application brings at least the following beneficial effects:
the method comprises the steps of dividing each video frame of a video into n blocking frames, storing the n blocking frames of each video frame to obtain the storage positions of the blocking frames, and generating splicing information of the n blocking frames of each video frame. And restoring the playing video according to the storage position and the splicing information. Because the resolution of the blocking frame is smaller than that of the video, the consumption of the memory can be reduced in the process of decoding the blocking frame to restore and play the video, and the video with the resolution exceeding the hardware limit can be played. The flexibility of playing the video is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a video playing system according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a principle of segmenting a video frame and playing back the video frame according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a video playing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another video playing method provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of an implementation process for segmenting each video frame in a video according to an embodiment of the present application;
fig. 7 is a schematic diagram of an implementation process for storing n block frames of each video frame according to an embodiment of the present application;
fig. 8 is a schematic diagram of an implementation process for numbering n block frames of each video frame according to an order of decoding timestamps and a corresponding block position of each block frame, provided by an embodiment of the present application;
fig. 9 is a schematic diagram of another implementation process for storing n block frames of each video frame according to an embodiment of the present application;
fig. 10 is a schematic diagram of an implementation process for reading n block frames of a video frame according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an implementation process for reading n block frames from a target file according to numbers according to an embodiment of the present application;
fig. 12 is a schematic diagram of another implementation process for reading n block frames of a video frame according to an embodiment of the present application;
fig. 13 is a schematic diagram of an implementation process for splicing n block frames of a video frame according to splicing information according to an embodiment of the present application;
fig. 14 is a schematic diagram of another implementation process for splicing n block frames of a video frame according to splicing information according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a storage module according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a generating module according to an embodiment of the present application;
FIG. 18 is a schematic structural diagram of another storage module provided in an embodiment of the present application;
FIG. 19 is a schematic structural diagram of another generation module provided in an embodiment of the present application;
fig. 20 is a schematic structural diagram of a video playback device according to an embodiment of the present application;
fig. 21 is a schematic structural diagram of a read module according to an embodiment of the present application;
FIG. 22 is a schematic structural diagram of a splicing module provided in an embodiment of the present application;
FIG. 23 is a schematic structural diagram of another read module provided in an embodiment of the present application;
FIG. 24 is a schematic structural diagram of another splice module provided in embodiments of the present application;
fig. 25 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 26 is a schematic structural diagram of a server according to an embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a video playing system according to an exemplary embodiment of the present application. As shown in fig. 1, the system includes a video processing device 110 and a video playback device 120. Optionally, the video processing device and the video playing device are the same computer device or two separate computer devices; the following description takes two computer devices as an example.
Alternatively, the video processing device 110 is a smart phone, a computer, a television, a multimedia player, an e-reader, and the like. Alternatively, the video processing device is a server, or a server cluster composed of a plurality of servers, or a virtual server in a cloud computing service center, and the like, which is not limited herein. The video playing device 120 is a smart phone, a computer, a television, a multimedia player, an e-reader, etc. Alternatively, the connection between the video processing device 110 and the video playback device 120 is established through a wired network or a wireless network.
It should be noted that the video processing device 110 is provided with a video processing client, and the video playing device 120 is provided with a video playing client. The video processing device 110 is connected to the video playing client on the video playing device 120 through the video processing client. The video processing client on the video processing device and the video playing client on the video playing device may be the same or different.
Fig. 2 is a schematic diagram of the principle of segmenting a video frame and playing back the video frame according to an embodiment of the present application. As shown in fig. 2, when the video playing client 202 (installed on the video playing device) attempts to play a video, it may be unable to do so because the resolution of the video exceeds the resolution supported by the video playing device, or because playing the video would consume too much memory. At this time, the video playing client 202 sends the video to the video processing client 201.
The video processing client 201 determines the position of the segmented frame segmented from each video frame in the video in the corresponding video frame, i.e. the position of the segmented frame, according to the number of the segmented blocks in the segmentation parameters and the resolution of the video frame. And dividing each video frame in the video into n block frames according to the block position. Then, the video processing client 201 stores the n block frames of each video frame, and obtains the storage location of each block frame. And generating splicing information of the n block frames of each video frame. The storage location and the splicing information are used for restoring and playing the video. The storage location is used for the video playing client 202 to obtain n blocked frames corresponding to the video frame, and the splicing information is used for the video playing client 202 to splice the n blocked frames corresponding to the obtained video frame, so as to obtain the video frame of the video. Optionally, the video processing client 201 also sends the storage location and the splicing information to the video playing client 202.
Illustratively, the decoding timestamp of the 1st video frame 203 of the video is 0, and its resolution is 4096 × 2160. The video processing client divides the video frame 203 according to the number of blocks in the division parameter and the resolution of the video frame 203. The number of blocks is 4, and the positions determined by the video processing client 201 for the block frames segmented from the video frame 203 are position 1, position 2, position 3, and position 4, respectively. The video processing client 201 divides the video frame 203 into a first block frame 203a, a second block frame 203b, a third block frame 203c, and a fourth block frame 203d according to position 1, position 2, position 3, and position 4. The decoding timestamp of the first block frame 203a is 0 and its resolution is 2048 × 1080, corresponding to position 1. The decoding timestamp of the second block frame 203b is 0 and its resolution is 2048 × 1080, corresponding to position 2. The decoding timestamp of the third block frame 203c is 0 and its resolution is 2048 × 1080, corresponding to position 3. The decoding timestamp of the fourth block frame 203d is 0 and its resolution is 2048 × 1080, corresponding to position 4. The video processing client 201 divides each video frame of the video into 4 block frames in the above manner, stores the n block frames of each video frame, and generates splicing information of the n block frames of each video frame.
The video playing client 202 obtains the storage locations and splicing information of the n block frames of each video frame of the video, reads the n block frames of each video frame according to the storage locations, splices the n block frames of each video frame into a video frame of the video according to the splicing information, and then plays the video frames, thereby restoring and playing the video.
Illustratively, with reference to fig. 2, the video playing client 202 reads the first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d of the video frame 203 according to the obtained storage location. It splices the first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d into the video frame 203 according to the splicing information, and plays the video frame 203. In this manner, the video playing client 202 can read the n block frames of each video frame of the video, splice them to obtain each video frame, and play each video frame, thereby restoring and playing the video.
In this way, each video frame of the video is divided into n block frames, the n block frames of each video frame are stored to obtain their storage positions, and splicing information of the n block frames of each video frame is generated; the video is then restored and played according to the storage positions and the splicing information. Because the resolution of a block frame is smaller than that of the video, less memory is consumed when decoding the block frames to restore and play the video, and a video whose resolution exceeds the hardware limit can be played. The flexibility of playing videos is improved.
Fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present application. The method may be used for a video processing device or a video processing client on a video processing device in a system as shown in fig. 1. As shown in fig. 3, the method includes:
Step 301, acquiring a video.
The resolution of the video exceeds the resolution supported by the video playing device, or the memory consumed by playing the video is larger than the memory of the video playing device. For example, the maximum resolution the video playing device supports playing is 2K, while the video has 4K resolution or higher. The video processing client receives the video sent by the video playing client, thereby acquiring the video. Alternatively, the video processing client obtains the video from the video processing device.
Step 302, segmenting each video frame in the video according to the segmentation parameters to obtain n block frames of each video frame.
Wherein n is an integer greater than 1. The resolution of a block frame is smaller than the resolution of the video frame it belongs to. There is no overlap between any two of the n block frames. The decoding timestamps of the n block frames are the same as the decoding timestamp of the video frame they belong to. Optionally, the n block frames of a video frame all have the same resolution, or have different resolutions.
Optionally, the segmentation parameter comprises the number of blocks. When the video processing client divides each video frame in the video, the number of blocks indicates how many block frames each video frame is divided into. The video processing client can divide a video frame according to the number of blocks and the resolution of the video frame. Illustratively, with continued reference to fig. 2, the video processing client 201 divides the video frame 203 equally according to the number of blocks, obtaining 4 block frames of the video frame 203.
Optionally, the segmentation parameters include block positions. A block position is used to indicate where each block frame segmented from the video frame is located within the video frame. Different videos correspond to different segmentation parameters. Illustratively, with continued reference to fig. 2, the block positions include the information of position 1, position 2, position 3, and position 4, according to which the video processing client 201 divides the video frame 203 into 4 block frames.
Step 303, storing the n block frames of each video frame to obtain the storage position of each block frame.
The storage location is used for acquiring n block frames corresponding to each video frame of the video. Optionally, the video processing client stores n blocked frames belonging to the same decoding timestamp in the same storage location. For example, n block frames belonging to the same decoding time stamp are stored in the same file in the order of the decoding time stamp. Or, n block frames belonging to the same decoding time stamp are stored under the same data index.
Optionally, the video processing client only includes video picture data of the video frame in n block frames of the video frame stored. The video processing client also separately generates an audio file containing audio data of the video frames for providing sound when the video is reproduced. Or the video processing client stores the audio data corresponding to the video frame in any one of the n block frames of the video frame.
Step 304, generating splicing information of the n block frames of each video frame, wherein the storage positions and the splicing information of the n block frames of the video frame are used for restoring and playing the video.
The splicing information is used for splicing the n block frames of a video frame, so as to obtain the video frame. Optionally, the splicing information includes at least one of correspondence information between each block frame and its block position, and correspondence information between the storage location and the decoding timestamps of the n block frames stored at that location. The splicing information can also include the block positions themselves.
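As a concrete illustration, a minimal sketch of what serialized splicing information could look like is given below in Python. The field names and the JSON layout are hypothetical, invented for this sketch; the description above only requires that the information relate block frames to block positions and, optionally, storage locations to decoding timestamps.

```python
import json

# A minimal sketch of serialized splicing information for one video. All field
# names are hypothetical: the description only requires that the information
# relate each block frame to its block position and, optionally, relate
# storage locations to decoding timestamps.
splicing_info = {
    "num_blocks": 4,                 # n, block frames per video frame
    "block_positions": {             # position number -> top-left corner and size
        "1": {"x": 0,    "y": 0,    "width": 2048, "height": 1080},
        "2": {"x": 2048, "y": 0,    "width": 2048, "height": 1080},
        "3": {"x": 0,    "y": 1080, "width": 2048, "height": 1080},
        "4": {"x": 2048, "y": 1080, "width": 2048, "height": 1080},
    },
    "storage": {                     # decoding timestamp -> storage location
        "0": "blocks/frame_000.bin",
    },
}
print(json.dumps(splicing_info, indent=2))
```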
In summary, in the video processing method provided in the embodiment of the present application, each video frame of a video is divided into n block frames, the n block frames of each video frame are stored to obtain the storage position of each block frame, and splicing information of the n block frames of each video frame is generated. The video is then restored and played according to the storage positions and the splicing information. Because the resolution of a block frame is smaller than that of the video, less memory is consumed when decoding the block frames to restore and play the video, and a video whose resolution exceeds the hardware limit can be played. The flexibility of playing videos is improved.
Fig. 4 is a schematic flowchart of a video playing method according to an embodiment of the present application. The method can be used for a video playing device or a video playing client on the video playing device in the system shown in fig. 1. As shown in fig. 4, the method includes:
step 401, obtaining storage locations and splicing information of n block frames of the video frames, where the block frames are obtained by dividing each video frame in the video.
Wherein n is an integer greater than 1. The storage position and the splicing information are used for restoring and playing the video according to n blocked frames of the video. The storage position is used for the video playing client to acquire n blocked frames corresponding to the video frames, and the splicing information is used for the video playing client to splice the n blocked frames corresponding to the acquired video frames so as to acquire the video frames of the video.
Optionally, the video playing client receives the storage location and the splicing information sent by the video processing client, thereby obtaining them. Alternatively, the video playing client receives the storage location sent by the video processing client and acquires the splicing information from the video playing device, or the video playing client acquires both the storage location and the splicing information from the video playing device.
Step 402, reading n block frames of the video frame according to the storage position.
The storage location refers to the storage location of a target file in which the n block frames are stored, and the video playing client reads the n block frames from the target file according to the storage location. Alternatively, the storage location refers to a data index under which the n block frames are stored, and the video playing client reads the n block frames from the location pointed to by the data index.
Step 403, splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video.
Optionally, the splicing information includes correspondence information between the n block frames and their block positions. A block position indicates where each block frame divided from a video frame is located within that video frame. The splicing information may further include the block positions themselves, or the block positions are stored on the video playing device. According to the obtained n block frames of a video frame and the splicing information, the video playing client can determine the position of each block frame in the corresponding video frame, splice the n block frames into the video frame according to those positions, and thereby obtain all the video frames of the video to be played back.
Illustratively, with continued reference to fig. 2, the video playing client reads the first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d of the video frame 203 according to the storage location. Then, the video playing client determines the positions of the first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d in the video frame 203 according to the splicing information, and splices them into the video frame 203 according to those positions, as sketched below. Optionally, a block position refers to the coordinates of the upper left corner of the block frame in the corresponding video frame.
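The splicing operation itself can be pictured with a short sketch. The following Python code assumes the decoded block frames are available as numpy arrays and that block positions are given as top-left coordinates, as described above; it pastes the four block frames of the fig. 2 example back into a full 4096 × 2160 frame. It is an illustration of the principle, not the patent's implementation.

```python
import numpy as np

def splice_frame(block_frames, block_positions, frame_w, frame_h):
    """Paste decoded block frames back into one full video frame.

    block_frames: dict mapping position number -> HxWx3 uint8 array.
    block_positions: dict mapping position number -> (x, y) top-left corner
    of that block in the full frame.
    """
    frame = np.zeros((frame_h, frame_w, 3), dtype=np.uint8)
    for pos_no, tile in block_frames.items():
        x, y = block_positions[pos_no]
        h, w = tile.shape[:2]
        frame[y:y + h, x:x + w] = tile  # render the tile at its block position
    return frame

# Example: four 2048x1080 block frames restored into a 4096x2160 frame,
# mirroring positions 1-4 of the fig. 2 example.
positions = {1: (0, 0), 2: (2048, 0), 3: (0, 1080), 4: (2048, 1080)}
tiles = {k: np.full((1080, 2048, 3), 60 * k, dtype=np.uint8) for k in positions}
restored = splice_frame(tiles, positions, 4096, 2160)
assert restored.shape == (2160, 4096, 3)
```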
Step 404, playing video frames of the video.
The video frame of the video is obtained by splicing the video playing client according to the n block frames of the read video frame. The video playing client plays all video frames of the video, so that the video is restored and played.
In summary, the video playing method provided in the embodiment of the present application obtains the storage positions and the splicing information of the n block frames of the video frame of the video, and can splice the n block frames of the video frame according to the storage positions and the splicing information to obtain the video frame, thereby implementing the playback of the video. Because the resolution of the blocking frame is smaller than that of the video, the consumption of the memory can be reduced in the process of decoding the blocking frame to restore and play the video, and the video with the resolution exceeding the hardware limit can be played. The flexibility of playing the video is improved.
Fig. 5 is a schematic flowchart of another video playing method according to an embodiment of the present application. The method may be used in a system as shown in fig. 1. As shown in fig. 5, the method includes:
step 501, the video processing client acquires a video.
The resolution of the video exceeds the resolution limited by the video playing device, or the memory consumed by playing the video is larger than the memory of the video playing device. And the video processing client receives the video sent by the video playing client so as to acquire the video. Alternatively, the video processing client obtains the video from the video processing device.
Step 502, the video processing client divides each video frame in the video according to the division parameters to obtain n block frames of each video frame.
Wherein n is an integer greater than 1. The resolution of a block frame is smaller than the resolution of the video frame it belongs to. There is no overlap between any two of the n block frames. The decoding timestamps of the n block frames are the same as the decoding timestamp of the video frame they belong to. Optionally, the n block frames of a video frame all have the same resolution, or have different resolutions.
The segmentation parameters are set by the video processing client. Optionally, different videos correspond to the same segmentation parameters, or different videos correspond to different segmentation parameters.
Optionally, the segmentation parameter comprises a number of partitions. As shown in fig. 6, the implementation process of step 502 includes the following steps 5021 and 5022:
in step 5021, the block position of each block frame in the video frame is determined according to the number of blocks and the resolution of the video frame.
The block position is used to indicate where each block frame is located in the corresponding video frame. Optionally, the block position includes the coordinates, within the video frame, of the top-left vertex of each block frame divided from the video frame, together with the resolution of the block frame. Illustratively, with continued reference to FIG. 2, the block positions include the coordinates of the top-left vertex of position 1 in video frame 203 and the resolution of the first block frame 203a, the coordinates of the top-left vertex of position 2 in video frame 203 and the resolution of the second block frame 203b, the coordinates of the top-left vertex of position 3 in video frame 203 and the resolution of the third block frame 203c, and the coordinates of the top-left vertex of position 4 in video frame 203 and the resolution of the fourth block frame 203d. The first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d have the same resolution.
And the video processing client divides the video frame into equal pixel areas according to the number of the blocks and the resolution of the video frame, so as to obtain the block positions corresponding to the n block frames of each video frame of the video. Or the video processing client randomly determines the block positions corresponding to the n block frames of each video frame of the video in the video frames according to the number of the blocks and the resolution of the video frames.
In step 5022, each video frame in the video is segmented according to the segmentation position of each segmentation frame to obtain n segmentation frames of each video frame.
The block position is the position occupied by each block frame divided from the video frame in the video frame. Illustratively, the resolution of a video frame of the video is 4096 × 2160 and the number of blocks is 5. The video processing client divides the video frame of the video into 5 block frames from top to bottom. The resolution of 5 block frames divided from the video frame of the video is 4096 × 432, and the coordinates of the top left corner vertices of the 5 block frames are (0, 0), (0, 432), (0, 864), (0, 1296), and (0, 1728), respectively.
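A minimal sketch of how the block positions for this 5-strip example could be computed is shown below; the function name and the horizontal-strip layout are just one possible choice, and a real implementation might tile in a grid instead.

```python
def strip_positions(frame_w, frame_h, num_blocks):
    """Divide a frame into num_blocks equal horizontal strips, returning each
    block's top-left corner and resolution as (x, y, width, height). This
    mirrors the 4096x2160 / 5-strip example; a grid layout is equally valid."""
    strip_h = frame_h // num_blocks
    return [(0, i * strip_h, frame_w, strip_h) for i in range(num_blocks)]

print(strip_positions(4096, 2160, 5))
# [(0, 0, 4096, 432), (0, 432, 4096, 432), (0, 864, 4096, 432),
#  (0, 1296, 4096, 432), (0, 1728, 4096, 432)]
```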
Step 503, the video processing client stores n block frames of each video frame to obtain a storage location of each block frame.
The decoding time stamp of the video frame is the same as the decoding time stamp of the n block frames of the video frame. Optionally, when the video processing client partitions the video frame, a decoding time stamp of the video frame is also written into each block frame of the video frame.
Optionally, as shown in fig. 7, the implementation process of step 503 includes the following steps 5031a and 5032 a:
in step 5031a, n block frames of each video frame are numbered according to the sequence of the decoding timestamps and the corresponding block positions of each block frame, where the numbering is used to indicate the block positions to which the block frames belong and the sequence of decoding the block frames.
The video processing client numbers the n block frames of each video frame according to the order of the decoding timestamps and the block position corresponding to each block frame, so that when the video playing client restores and plays the video, it can determine, from the numbers, the n block frames corresponding to each video frame and the block position corresponding to each block frame.
Illustratively, with continued reference to fig. 2, the video processing client numbers the first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d separated from the 1st video frame 203 of the video. The first block frame 203a is numbered 1, the second block frame 203b is numbered 2, the third block frame 203c is numbered 3, and the fourth block frame 203d is numbered 4. The video processing client then numbers the block frames of the 2nd video frame of the video according to the order of the decoding timestamps. The video processing client segments the 2nd video frame in the same manner as the video frame 203; the numbers of its upper-left, upper-right, lower-left, and lower-right block frames are 5, 6, 7, and 8, respectively. By analogy, the video processing client numbers the n block frames of each video frame of the video: a block frame belonging to position 1 is numbered 4m + 1, a block frame belonging to position 2 is numbered 4m + 2, a block frame belonging to position 3 is numbered 4m + 3, and a block frame belonging to position 4 is numbered 4m + 4, where m is a natural number.
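This numbering rule reduces to a small formula; the sketch below, with hypothetical names, reproduces the 4m + 1 .. 4m + 4 pattern from the example.

```python
def sequential_number(frame_index, position_index, num_blocks=4):
    """Number block frames in decoding order: the block at position p
    (1-based) of the m-th video frame (0-based) is numbered
    num_blocks * m + p, reproducing the 4m+1 .. 4m+4 pattern above."""
    return num_blocks * frame_index + position_index

assert [sequential_number(0, p) for p in (1, 2, 3, 4)] == [1, 2, 3, 4]
assert [sequential_number(1, p) for p in (1, 2, 3, 4)] == [5, 6, 7, 8]
```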
Optionally, the number includes a timestamp number and a location number. As shown in fig. 8, the implementation process of step 5031a includes the following steps 5031a1 to 5031a 3:
in step 5031a1, for any one of the block frames of each video frame, a timestamp number is generated from the decoding timestamp of the block frame.
Optionally, the timestamp numbers of block frames corresponding to the same decoding timestamp are the same. The video processing client determines the number of digits of the timestamp number according to the number of video frames in the video. For example, if the video includes 200 video frames, the video processing client determines that the timestamp number is a 3-digit number. Continuing with fig. 2, the timestamp numbers of the first block frame 203a, the second block frame 203b, the third block frame 203c, and the fourth block frame 203d are all 001.
In step 5031a2, a position number is generated from the tile position corresponding to the tile frame.
Optionally, the position numbers of block frames corresponding to the same block position are the same. The video processing client determines the number of digits of the position number according to the number of blocks. Illustratively, with reference to fig. 2, among the block frames divided from all the video frames of the video, a block frame corresponding to position 1 has position number 1, a block frame corresponding to position 2 has position number 2, a block frame corresponding to position 3 has position number 3, and a block frame corresponding to position 4 has position number 4.
In step 5031a3, the timestamp numbers and the position numbers are concatenated to obtain the numbers of the blocked frames.
And the video processing client takes the timestamp number as the high order of the number of the block frame and takes the position number as the low order of the number of the block frame for splicing. Or the video processing client takes the timestamp number as the lower bit of the number of the block frame and takes the position number as the upper bit of the number of the block frame for splicing. Optionally, the video processing client may further add characters between the timestamp numbers and the position numbers to separate the timestamp numbers from the position numbers in the numbers of the block frames. Illustratively, with continued reference to fig. 2, the timestamp number of the first block frame is 001, the location number is 1, and the number of the first block frame is 0011.
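A minimal sketch of the timestamp-number/position-number concatenation follows, assuming the timestamp number is used as the high-order digits, as in the 0011 example; the function name and digit widths are illustrative.

```python
def block_number(frame_index, position_index, ts_digits=3):
    """Concatenate a zero-padded timestamp number (high-order digits) with a
    position number (low-order digit). ts_digits would be derived from the
    total frame count, e.g. 3 digits for a 200-frame video."""
    return f"{frame_index + 1:0{ts_digits}d}{position_index}"

assert block_number(0, 1) == "0011"  # 1st video frame, position 1
assert block_number(0, 4) == "0014"  # 1st video frame, position 4
```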
In step 5032a, n block frames of each video frame are stored in the target file in the order of their numbers, where the storage location of the target file is the storage location of each block frame.
The video processing client stores the n block frames of each video frame into the target file in the order of their numbers, so that when the video playing client restores and plays the video, it can read the block frames from the target file in sequence and splice them to obtain the video frames of the video. The storage location of the target file refers to the index address of the target file.
Illustratively, with continued reference to fig. 2, the first block frame 203a is numbered 0011, the second block frame 203b is numbered 0012, the third block frame 203c is numbered 0013, and the fourth block frame 203d is numbered 0014. Then, in the order in which block frames are stored in the target file, the first is the first block frame 203a, the second is the second block frame 203b, the third is the third block frame 203c, the fourth is the fourth block frame 203d, and the fifth is the upper-left block frame of the 2nd video frame.
Optionally, as shown in fig. 9, the implementation process of step 503 includes the following steps 5031b and 5032 b:
in step 5031b, the n block frames of each video frame are numbered according to the same numbering rule according to the corresponding block positions of the block frames, where the numbering is used to indicate the block positions to which the block frames belong.
The video processing client numbers the n block frames of each video frame according to the same numbering rule, so that block frames corresponding to the same block position have the same position number.
In step 5032b, n block frames belonging to the same decoding time stamp are stored at the same position, which is the storage position of the block frame.
The video processing client stores the n block frames belonging to the same decoding timestamp at the same position, so that when the block frames at a certain storage position are read, they can be spliced to obtain a video frame of the video. The same position refers to the data start position and data end position of the data stored for the n block frames corresponding to the same decoding timestamp.
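This same-position storage can be pictured as appending all same-timestamp blocks as one contiguous record and remembering its start and end offsets. The following sketch uses an invented length-prefixed binary layout; the patent does not prescribe any particular on-disk format.

```python
import struct

def store_group(f, index, dts, encoded_blocks):
    """Append the n encoded block frames sharing decoding timestamp `dts` as
    one contiguous record, and remember the record's start and end offsets
    in `index` (decoding timestamp -> (start, end))."""
    start = f.tell()
    for blob in encoded_blocks:
        f.write(struct.pack("<I", len(blob)))  # length prefix per block
        f.write(blob)
    index[dts] = (start, f.tell())  # data start and data end positions

index = {}
with open("blocks.bin", "wb") as f:
    store_group(f, index, dts=0, encoded_blocks=[b"tile-a", b"tile-b"])
print(index)  # {0: (0, 20)}
```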
Step 504, the video processing client generates splicing information of the n block frames of each video frame, and the storage positions and the splicing information of the n block frames of the video frame are used for restoring the playing video.
The splicing information is used for splicing n block frames corresponding to the video frame, so that the video frame of the video is obtained.
Optionally, when the implementation process of step 503 includes step 5031a and step 5032a, the video processing client generates correspondence information between the position number of each block frame and its block position to obtain the splicing information. Illustratively, with continued reference to fig. 2, the video processing client generates splicing information indicating that the first block frame 203a with position number 1 corresponds to position 1, the second block frame 203b with position number 2 corresponds to position 2, the third block frame 203c with position number 3 corresponds to position 3, and the fourth block frame 203d with position number 4 corresponds to position 4.
Alternatively, when the implementation process of step 503 includes step 5031b and step 5032b, the video processing client generates correspondence information between the number of each block frame and its block position, and correspondence information between the storage position and the decoding timestamp of the block frames, to obtain the splicing information. The correspondence between storage positions and decoding timestamps is used for reading the block frames of the video frames in the order of the decoding timestamps, so that the block frames are spliced in decoding-timestamp order to obtain the video frames, and the video is then restored and played in the order of the decoding timestamps of the video frames.
Step 505, the video processing client sends the storage position and the splicing information to the video playing client.
The storage position and the splicing information are used for restoring and playing the video according to n blocked frames of the video. The storage position is used for the video playing client to acquire n blocked frames corresponding to the video frames, and the splicing information is used for the video playing client to splice the n blocked frames corresponding to the acquired video frames so as to acquire the video frames of the video.
Optionally, the video playing client receives the storage location and the splicing information sent by the video processing client, thereby obtaining them. Alternatively, the video playing client receives the storage location sent by the video processing client and acquires the splicing information from the video playing device, or the video playing client acquires both the storage location and the splicing information from the video playing device.
Step 506, the video playing client reads n block frames of the video frames according to the storage position.
Optionally, the storage location comprises a storage location of the target file. The target file stores the block frames stored according to the numbers. As shown in fig. 10, the implementation of step 506 includes the following steps 5061 a:
in step 5061a, n block frames are read from the target file according to the numbers, resulting in n block frames of the video frame.
The n block frames in the target file are numbered according to the sequence of the decoding time stamps of the block frames and the block positions corresponding to the block frames. The video playing client can determine the sequence of the video frames corresponding to the blocking frames in all the video frames of the video according to the numbers of the blocking frames. For example, the numbers of 4 block frames read from the target file by the video playing client are 1, 2, 3, and 4, respectively, and then the video frame corresponding to the 4 block frames is the 1 st frame video frame in all the video frames of the video. The numbers of the 4 blocking frames read from the target file by the video playing client are 5, 6, 7 and 8 respectively, and then the video frame corresponding to the 4 blocking frames is the 2 nd frame video frame in all the video frames of the video.
Optionally, the number comprises a timestamp number. As shown in fig. 11, the implementation of step 5061a includes the following steps 5061a 1:
in step 5061a1, n blocked frames are read from the target file according to the timestamp number.
Optionally, the time stamp numbers of the blocking frames corresponding to the same decoding time stamp are the same. The video playing client can determine the sequence of the video frames corresponding to the blocking frames in all the video frames of the video according to the read time stamp numbers of the blocking frames. For example, if there are 4 block frames with timestamp number 001 among 100 block frames read by the video playing client, the 4 block frames with timestamp number 001 correspond to the 1 st video frame in all video frames of the video.
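Continuing the hypothetical concatenated-numbering scheme sketched earlier, the player-side grouping of read block frames by timestamp number could look like this:

```python
from collections import defaultdict

def group_by_timestamp(numbers, ts_digits=3):
    """Group block-frame numbers such as '0011' by their timestamp number so
    that the player can collect the n blocks belonging to one video frame."""
    groups = defaultdict(list)
    for no in numbers:
        groups[no[:ts_digits]].append(no)
    return groups

groups = group_by_timestamp(["0011", "0012", "0013", "0014", "0021"])
assert groups["001"] == ["0011", "0012", "0013", "0014"]
```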
Optionally, the splicing information includes the correspondence between the storage location and the decoding timestamps of the n block frames stored at that location, and the decoding timestamps of the n block frames are the same. As shown in fig. 12, the implementation of step 506 includes the following step 5061b:
In step 5061b, the n block frames of the video frame are read from the storage location according to the splicing information.
From the storage location, the video playing client can read the n block frames stored there. From the splicing information, the video playing client can also determine the decoding timestamps of the n block frames. Because the decoding timestamp of a video frame is the same as the decoding timestamps of its block frames, the video playing client can determine which video frame a block frame belongs to from the block frame's decoding timestamp.
Step 507, the video playing client splices the n block frames of the video frame according to the splicing information to obtain the video frame of the video.
When the implementation of step 506 includes step 5061a1, optionally, the number of the block frame further includes a position number, and the splicing information includes the correspondence between the position number and the block position. As shown in fig. 13, the implementation of step 507 includes the following steps 5071a and 5072a:
In step 5071a, the block position of each of the n block frames of the video frame is determined according to the position number of the block frame and the splicing information.
The block position indicates where each block frame is located in the corresponding video frame. From the position numbers of the n block frames it reads, the video playing client can determine the positions the n block frames occupy in the corresponding video frame. Illustratively, with continued reference to fig. 2, suppose the position numbers of the block frames of the video frame 203 read by the video playing client are 1, 2, 3, and 4. From the splicing information, the video playing client can determine that position 1 corresponds to the block frame with position number 1, position 2 to the block frame with position number 2, position 3 to the block frame with position number 3, and position 4 to the block frame with position number 4.
In step 5072a, the n block frames are each rendered to the block positions corresponding to the block frames, obtaining the video frame of the video.
By way of example, continuing with fig. 2, the video playing client renders the block frame corresponding to position 1 of the video frame 203 to position 1, the block frame corresponding to position 2 to position 2, the block frame corresponding to position 3 to position 3, and the block frame corresponding to position 4 to position 4, thereby obtaining the video frame 203. A sketch of this rendering is given below.
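The tile-to-position rendering can be sketched with NumPy arrays standing in for decoded block frames. The 2x2 grid and the position-to-cell convention below are assumptions for illustration; this application does not fix a particular grid shape:

    import numpy as np

    # Assumed convention: position 1 = top-left, 2 = top-right,
    # 3 = bottom-left, 4 = bottom-right of a rows x cols grid.
    def splice_frame(tiles, rows=2, cols=2):
        # tiles maps block position (1-based) -> decoded tile image.
        th, tw = tiles[1].shape[:2]            # tile height and width
        frame = np.zeros((rows * th, cols * tw, 3), dtype=np.uint8)
        for pos, tile in tiles.items():
            r, c = divmod(pos - 1, cols)       # grid cell for this tile
            frame[r*th:(r+1)*th, c*tw:(c+1)*tw] = tile
        return frame

    # Example: four 540x960 tiles reassemble into one 1080x1920 frame.
    tiles = {p: np.full((540, 960, 3), p * 60, np.uint8)
             for p in range(1, 5)}
    print(splice_frame(tiles).shape)           # (1080, 1920, 3)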
When the implementation of step 506 includes step 5061b, optionally, each block frame has a corresponding number. The splicing information further includes the correspondence between the number of each block frame and its block position, where the block position indicates the position of each block frame in the corresponding video frame. As shown in fig. 14, the implementation of step 507 includes the following steps 5071b and 5072b:
In step 5071b, the block position of each of the n block frames of the video frame is determined based on the number of the block frame and the splicing information.
From the numbers of the n block frames it reads and the splicing information, the video playing client can determine the positions the n block frames occupy in the corresponding video frame.
In step 5072b, the n block frames are each rendered to the block positions corresponding to the block frames, obtaining the video frame of the video.
Step 508, the video playing client plays the video frames of the video.
Each video frame of the video is obtained by the video playing client splicing the n block frames it has read for that frame. The video playing client can splice all video frames of the video from all the block frames it reads, and the video is restored and played by playing all the video frames. Optionally, when the video playing client plays the video frames of the video, it also plays the audio file corresponding to the video frames.
The steps executed by the video processing client can independently constitute a video processing method on the video processing client side, and the steps executed by the video playing client can independently constitute a video playing method on the video playing client side.
In summary, in the video playing method provided by the embodiments of the present application, each video frame of a video is divided into n block frames, the n block frames of each video frame are stored to obtain the storage locations of the block frames, and splicing information for the n block frames of each video frame is generated. The video is restored and played according to the storage locations and the splicing information. Because the resolution of a block frame is smaller than that of the video, decoding block frames to restore and play the video consumes less memory, so a video whose resolution exceeds the hardware limit can still be played. The flexibility of video playback is thereby improved.
In addition, numbering the block frames divided from each video frame and generating splicing information for the block frames simplifies the process of restoring and playing the video. The video processing client sends the storage location and the splicing information to the video playing client, which reduces the amount of data transmitted during restoration and playback and improves efficiency.
It should be noted that the order of the steps of the methods provided by the embodiments of the present application may be adjusted as appropriate, and steps may be added or removed according to circumstances. Any variation readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described further.
Fig. 15 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. The apparatus may be used in a video processing device or a video processing client on a video processing device in a system as shown in fig. 1. As shown in fig. 15, the apparatus 150 includes:
An obtaining module 1501, configured to obtain a video.
A dividing module 1502 is configured to divide each video frame in the video according to the dividing parameters to obtain n block frames of each video frame, where n is an integer greater than 1.
The storage module 1503 is configured to store n block frames of each video frame to obtain a storage location of each block frame.
A generating module 1504, configured to generate splicing information of the n block frames of each video frame, where the storage locations of the n block frames of each video frame and the splicing information are used to restore and play the video.
Optionally, the division parameter comprises a number of blocks. The dividing module 1502 is configured to:
and determining the block position of each block frame in the video frame according to the number of the blocks and the resolution of the video frame. The block position is used to indicate where each block frame is located in the corresponding video frame.
divide each video frame in the video according to the block position of each block frame, obtaining the n block frames of each video frame. A sketch of computing the block positions is given below.
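As an illustration of determining block positions from the number of blocks and the frame resolution, the following Python sketch factors n into a near-square grid of tile rectangles. The row/column factoring and the (x, y, w, h) rectangle representation are assumptions; this application does not prescribe a particular layout:

    import math

    def block_positions(width, height, n):
        # Split a width x height frame into n tiles; return, per block
        # position (1-based), the tile rectangle (x, y, w, h).
        rows = int(math.sqrt(n))
        while n % rows:                # find the nearest factor of n
            rows -= 1
        cols = n // rows
        tw, th = width // cols, height // rows
        return {r * cols + c + 1: (c * tw, r * th, tw, th)
                for r in range(rows) for c in range(cols)}

    # Example: a 3840x2160 frame split into n = 4 block frames of
    # 1920x1080 each, within reach of a 1080p-limited decoder.
    for pos, rect in block_positions(3840, 2160, 4).items():
        print(pos, rect)
    # 1 (0, 0, 1920, 1080)
    # 2 (1920, 0, 1920, 1080)
    # 3 (0, 1080, 1920, 1080)
    # 4 (1920, 1080, 1920, 1080)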
Optionally, the decoding timestamp of the video frame is the same as the decoding timestamps of the n block frames of the video frame. As shown in fig. 16, the storage module 1503 includes:
The first numbering sub-module 15031 is configured to number the n block frames of each video frame according to the order of the decoding timestamps and the block positions corresponding to the block frames, where the number indicates the block position to which a block frame belongs and the order in which the block frames are decoded.
The first storage sub-module 15032 is configured to store the n block frames of each video frame into the target file according to the sequence of numbers, where the storage location of the target file is the storage location of each block frame.
Optionally, the number includes a timestamp number and a position number. The first numbering sub-module 15031 is configured to:
for any block frame of each video frame, generate the timestamp number from the decoding timestamp of the block frame;
generate the position number according to the block position corresponding to the block frame; and
splice the timestamp number and the position number to obtain the number of the block frame. A sketch of this splicing is given below.
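A minimal sketch of splicing a timestamp number and a position number into a block-frame number, and of parsing the number back apart; the fixed-width decimal encoding is an assumption for illustration only:

    # Hypothetical fixed-width encoding "TTTTPP": e.g. the block frame at
    # position 3 of the frame with timestamp number 12 gets number 001203.
    TS_WIDTH, POS_WIDTH = 4, 2

    def make_number(ts_number, position):
        return f"{ts_number:0{TS_WIDTH}d}{position:0{POS_WIDTH}d}"

    def parse_number(number):
        return int(number[:TS_WIDTH]), int(number[TS_WIDTH:])

    num = make_number(12, 3)
    print(num)                 # 001203
    print(parse_number(num))   # (12, 3)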
Optionally, as shown in fig. 17, the generating module 1504 includes:
The first generating sub-module 15041 is configured to generate the correspondence between the position number of each block frame and its block position, obtaining the splicing information.
Optionally, the decoding timestamp of the video frame is the same as the decoding timestamps of the n block frames of the video frame. As shown in fig. 18, the storage module 1503 includes:
The second numbering sub-module 15033 is configured to number the n block frames of each video frame under the same numbering rule according to the block positions corresponding to the block frames, the number indicating the block position to which a block frame belongs.
A second storage sub-module 15034, configured to store the n block frames that share the same decoding timestamp at the same location, that location being the storage location of those block frames.
Optionally, as shown in fig. 19, the generating module 1504 includes:
The second generating sub-module 15042 is configured to generate the correspondence between the number of each block frame and its block position, and the correspondence between the storage location and the decoding timestamp of the block frames, obtaining the splicing information.
Fig. 20 is a schematic structural diagram of a video playback device according to an embodiment of the present application. The apparatus can be used for a video playing device or a video playing client on the video playing device in the system shown in fig. 1. As shown in fig. 20, the apparatus 200 includes:
An obtaining module 2001, configured to obtain the storage locations and splicing information of the n block frames of a video frame, where the block frames are obtained by dividing each video frame in a video, and n is an integer greater than 1.
The reading module 2002 is configured to read n block frames of the video frame according to the storage location.
A splicing module 2003, configured to splice the n block frames of the video frame according to the splicing information to obtain the video frame of the video.
A playing module 2004 is used for playing the video frames of the video.
Optionally, the storage location includes the storage location of a target file, and the target file stores the block frames according to their numbers. As shown in fig. 21, the reading module 2002 includes:
The first reading sub-module 20021 is configured to read the n block frames from the target file according to their numbers, obtaining the n block frames of the video frame.
Optionally, the number comprises a timestamp number. The first reading sub-module 20021 is configured to:
and reading n block frames from the target file according to the timestamp numbers.
Optionally, the number further includes a position number, and the splicing information includes the correspondence between the position number and the block position, where the block position indicates the position of each block frame in the corresponding video frame. As shown in fig. 22, the splicing module 2003 includes:
The first determining sub-module 20031 is configured to determine the block position of each of the n block frames of the video frame according to the position number of the block frame and the splicing information.
The first rendering sub-module 20032 is configured to render the n block frames to the block positions corresponding to the block frames, respectively, obtaining the video frame of the video.
Optionally, the splicing information includes the correspondence between the storage location and the decoding timestamps of the n block frames stored at that location, and the decoding timestamps of the n block frames are the same. As shown in fig. 23, the reading module 2002 includes:
The second reading sub-module 20022 is configured to read the n block frames of the video frame from the storage location according to the splicing information.
Optionally, each block frame corresponds to a number, and the splicing information further includes the correspondence between the number of each block frame and its block position, where the block position indicates the position of each block frame in the corresponding video frame. As shown in fig. 24, the splicing module 2003 includes:
The second determining sub-module 20033 is configured to determine the block position of each of the n block frames of the video frame according to the number of the block frame and the splicing information;
The second rendering sub-module 20034 is configured to render the n block frames to the block positions corresponding to the block frames, respectively, obtaining the video frame of the video.
It should be noted that the division of the apparatus provided by the foregoing embodiments into functional modules is only illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; their specific implementation is described in detail in the method embodiments and is not repeated here.
An embodiment of the present application also provides an electronic device, including a processor and a memory, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the video processing method or the video playing method provided by the method embodiments above.
Optionally, at least one of a video processing client and a video playing client is installed on the electronic device, and the electronic device is a terminal. Illustratively, fig. 25 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
In general, terminal 2500 includes: a processor 2501 and a memory 2502.
The processor 2501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 2501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 2501 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing content required to be displayed on the display screen. In some embodiments, the processor 2501 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2502 may include one or more computer-readable storage media, which may be non-transitory. Memory 2502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 2502 is used to store at least one instruction for execution by the processor 2501 to implement a video processing method or a video playing method provided by method embodiments herein.
In some embodiments, the terminal 2500 may further optionally include: a peripheral interface 2503 and at least one peripheral. The processor 2501, memory 2502, and peripheral interface 2503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 2503 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2504, a display screen 2505, a camera assembly 2506, audio circuitry 2507, a positioning assembly 2508, and a power supply 2509.
The peripheral interface 2503 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 2501 and the memory 2502. In some embodiments, the processor 2501, the memory 2502, and the peripheral interface 2503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2501, the memory 2502, and the peripheral interface 2503 may be implemented on a separate chip or circuit board, which is not limited in this application.
The Radio Frequency circuit 2504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2504 communicates with a communication network and other communication devices by electromagnetic signals. The rf circuit 2504 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 2504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 2505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2505 is a touch display screen, the display screen 2505 also has the ability to capture touch signals on or over the surface of the display screen 2505. The touch signal may be input to the processor 2501 as a control signal for processing. At this point, the display 2505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 2505 may be one, providing a front panel of the terminal 2500; in other embodiments, the display 2505 can be at least two, respectively disposed on different surfaces of the terminal 2500 or in a folded design; in still other embodiments, display 2505 may be a flexible display disposed on a curved surface or on a folded surface of terminal 2500. Even more, the display 2505 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 2505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
Camera assembly 2506 is used to capture images or video. Optionally, camera assembly 2506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal 2500 and the rear camera is disposed on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 2507 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2501 for processing, or inputting the electric signals to the radio frequency circuit 2504 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 2500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2501 or the radio frequency circuitry 2504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 2507 may also include a headphone jack.
The positioning component 2508 is used to locate the current geographic position of the terminal 2500 for navigation or LBS (Location Based Service). The positioning component 2508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 2509 is used to provide power to the various components in terminal 2500. Power supply 2509 can be an alternating current, direct current, disposable battery, or rechargeable battery. When power supply 2509 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 2500 also includes one or more sensors 2510. The one or more sensors 2510 include, but are not limited to: acceleration sensor 2511, gyro sensor 2512, pressure sensor 2513, fingerprint sensor 2514, optical sensor 2515, and proximity sensor 2516.
The acceleration sensor 2511 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 2500. For example, the acceleration sensor 2511 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 2501 may control the touch display screen 2505 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2511. The acceleration sensor 2511 may also be used for game or user motion data acquisition.
The gyro sensor 2512 may detect a body direction and a rotation angle of the terminal 2500, and the gyro sensor 2512 may cooperate with the acceleration sensor 2511 to acquire a 3D motion of the user on the terminal 2500. The processor 2501 may implement the following functions according to the data collected by the gyro sensor 2512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 2513 may be disposed on the side frames of terminal 2500 and/or on the lower layers of touch screen display 2505. When the pressure sensor 2513 is disposed on the side frame of the terminal 2500, a user's holding signal of the terminal 2500 can be detected, and the processor 2501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 2513. When the pressure sensor 2513 is disposed at the lower layer of the touch display screen 2505, the processor 2501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 2505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 2514 is used for collecting a fingerprint of a user, and the processor 2501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 2514, or the fingerprint sensor 2514 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 2501 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. The fingerprint sensor 2514 may be disposed on the front, back, or side of the terminal 2500. When a physical key or vendor Logo is provided on the terminal 2500, the fingerprint sensor 2514 may be integrated with the physical key or vendor Logo.
The optical sensor 2515 is used to collect ambient light intensity. In one embodiment, the processor 2501 may control the display brightness of the touch display screen 2505 based on the ambient light intensity collected by the optical sensor 2515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 2505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 2505 is turned down. In another embodiment, processor 2501 may also dynamically adjust the imaging parameters of camera assembly 2506 based on the intensity of ambient light collected by optical sensor 2515.
A proximity sensor 2516, also known as a distance sensor, is typically provided on the front panel of the terminal 2500. The proximity sensor 2516 is used to measure the distance between the user and the front of the terminal 2500. In one embodiment, when the proximity sensor 2516 detects that the distance between the user and the front surface of the terminal 2500 gradually decreases, the processor 2501 controls the touch display screen 2505 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 2516 detects that the distance between the user and the front surface of the terminal 2500 gradually increases, the processor 2501 controls the touch display screen 2505 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 25 does not constitute a limitation of terminal 2500, and may include more or fewer components than shown, or combine certain components, or employ a different arrangement of components.
Optionally, the electronic device can also be a server. For example, fig. 26 is a schematic structural diagram of a server provided in an embodiment of the present application.
The server 2600 includes a Central Processing Unit (CPU) 2601, a system memory 2604 including a Random Access Memory (RAM) 2602 and a Read-Only Memory (ROM) 2603, and a system bus 2605 connecting the system memory 2604 and the CPU 2601. The server 2600 also includes a basic input/output system (I/O system) 2606, which facilitates the transfer of information between components within the server, and a mass storage device 2607 for storing an operating system 2613, application programs 2614, and other program modules 2615.
The basic input/output system 2606 includes a display 2608 for displaying information and an input device 2609, such as a mouse, keyboard, etc., for user input of information. Wherein the display 2608 and the input device 2609 are connected to the central processing unit 2601 through an input-output controller 2610 connected to the system bus 2605. The basic input/output system 2606 may also include an input/output controller 2610 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 2610 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 2607 is connected to the central processing unit 2601 through a mass storage controller (not shown) connected to the system bus 2605. The mass storage device 2607 and its associated computer-readable storage media provide non-volatile storage for the server 2600. That is, the mass storage device 2607 may include a computer-readable storage medium (not shown) such as a hard disk or a Compact disk-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable storage media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable storage instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory devices, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 2604 and mass storage device 2607 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 2601, the one or more programs containing instructions for implementing the method embodiments described above, and the central processing unit 2601 executes the one or more programs to implement the methods provided by the various method embodiments described above.
According to various embodiments of the present application, the server 2600 may also operate as a remote server connected through a network such as the Internet. That is, the server 2600 may be connected to the network 2612 through a network interface unit 2611 coupled to the system bus 2605, or the network interface unit 2611 may be used to connect to other types of networks or remote server systems (not shown).
The memory also includes one or more programs, which are stored in the memory, and the one or more programs include instructions for performing the steps performed by the server in the methods provided by the embodiments of the present application.
The embodiment of the present application further provides a computer storage medium, where at least one instruction, at least one program, a code set, or a set of instructions may be stored in the storage medium, and when the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor of a computer device, the video processing method or the video playing method provided by the foregoing method embodiments is implemented.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an example of the present application and is not intended to be limiting. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (17)

1. A method of video processing, the method comprising:
acquiring a video;
dividing each video frame in the video according to a division parameter to obtain n block frames of each video frame, wherein n is an integer greater than 1;
storing the n block frames of each video frame to obtain the storage position of each block frame;
and generating splicing information of the n block frames of each video frame, wherein the storage positions of the n block frames of the video frames and the splicing information are used for restoring and playing the video.
2. The method of claim 1, wherein the partition parameter comprises a number of partitions;
the dividing each video frame in the video according to the dividing parameters to obtain n block frames of each video frame includes:
determining a block position of each block frame in the video frame according to the number of blocks and the resolution of the video frame, wherein the block position is used for indicating the position of each block frame in the corresponding video frame;
and dividing each video frame in the video according to the block position of each block frame to obtain n block frames of each video frame.
3. The method of claim 1, wherein the decoding timestamps of the video frames are the same as the decoding timestamps of the n block frames of the video frames; the storing the n block frames of each video frame to obtain the storage location of each block frame includes:
numbering the n block frames of each video frame according to the sequence of the decoding timestamps and the corresponding block positions of the block frames, wherein the numbers are used for indicating the block positions to which the block frames belong and the sequence for decoding the block frames;
and storing the n block frames of each video frame into a target file according to the sequence of the numbers, wherein the storage position of the target file is the storage position of each block frame.
4. The method of claim 3, wherein the numbering comprises timestamp numbering and position numbering, and wherein the numbering of the n block frames of each of the video frames according to the order of the decoding timestamps and the corresponding block positions of each of the block frames comprises:
for any one block frame of each video frame, generating the timestamp number according to the decoding timestamp of the block frame;
generating the position number according to the block position corresponding to the block frame;
and splicing the timestamp numbers and the position numbers to obtain the numbers of the block frames.
5. The method of claim 4, wherein the generating the splicing information of the n block frames of each video frame comprises:
and generating corresponding information of the position number and the block position of each block frame to obtain the splicing information.
6. The method of claim 1, wherein the decoding timestamps of the video frames are the same as the decoding timestamps of the n block frames of the video frames; the storing the n block frames of each video frame to obtain the storage location of each block frame includes:
numbering the n block frames of each video frame according to the same numbering rule according to the block positions corresponding to the block frames, wherein the numbering is used for indicating the block positions to which the block frames belong;
and storing the n block frames belonging to the same decoding time stamp at the same position, wherein the same position is the storage position of the block frame.
7. The method according to claim 2 or 6, wherein the generating the splicing information of the n block frames of each video frame comprises:
and generating corresponding information of the serial number of each block frame and the block position and corresponding information of the storage position and the decoding time stamp of the block frame to obtain the splicing information.
8. A video playback method, the method comprising:
acquiring storage positions and splicing information of n block frames of a video frame, wherein the block frames are obtained by dividing each video frame in a video, and n is an integer greater than 1;
reading n block frames of the video frame according to the storage position;
splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video;
and playing the video frame of the video.
9. The method of claim 8, wherein the storage location comprises a storage location of a target file, the target file having the block frames stored therein by number, and wherein reading the n block frames of the video frame based on the storage location comprises:
and reading the n block frames from the target file according to the serial numbers to obtain the n block frames of the video frame.
10. The method of claim 9, wherein the number comprises a timestamp number;
the reading the n block frames from the target file according to the serial numbers to obtain the n block frames of the video frame includes:
and reading the n block frames from the target file according to the timestamp numbers.
11. The method according to claim 10, wherein the numbers further include position numbers, and the splicing information includes correspondence information between the position numbers and block positions, and the block positions are used to indicate positions of each of the block frames in the corresponding video frame;
the splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video comprises:
determining the blocking position of each blocking frame in n blocking frames of the video frame according to the position number of the blocking frame and the splicing information;
and respectively rendering the n blocking frames to the blocking positions corresponding to the blocking frames to obtain the video frames of the video.
12. The method according to claim 8, wherein the splicing information includes corresponding information of the storage location and decoding time stamps of the n block frames stored in the storage location, and the decoding time stamps of the n block frames are the same;
the reading n block frames of the video frame according to the storage location includes:
and reading n block frames of the video frame from the storage position according to the splicing information.
13. The method according to claim 12, wherein each of the block frames has a corresponding number, and the splicing information further includes corresponding information of the number of each of the block frames and a block position, the block position being used to indicate a position of each of the block frames in the corresponding video frame;
the splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video comprises:
determining the blocking position of each blocking frame in n blocking frames of the video frame according to the serial number of the blocking frame and the splicing information;
and respectively rendering the n blocking frames to the blocking positions corresponding to the blocking frames to obtain the video frames of the video.
14. A video processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a video;
the segmentation module is used for segmenting each video frame in the video according to segmentation parameters to obtain n block frames of each video frame, wherein n is an integer greater than 1;
the storage module is used for storing the n block frames of each video frame to obtain the storage position of each block frame;
and the generating module is used for generating splicing information of the n block frames of each video frame, and the storage positions of the n block frames of the video frames and the splicing information are used for restoring and playing the video.
15. A video playback apparatus, comprising:
an acquisition module, used for acquiring the storage positions and splicing information of n block frames of a video frame, wherein the block frames are obtained by dividing each video frame in a video, and n is an integer greater than 1;
a reading module, configured to read n block frames of the video frame according to the storage location;
the splicing module is used for splicing the n block frames of the video frame according to the splicing information to obtain the video frame of the video;
and the playing module is used for playing the video frame of the video.
16. An electronic device, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the video processing method of any of claims 1 to 7 or the video playing method of any of claims 8 to 13.
17. A computer storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions that, when loaded and executed by a processor of an electronic device, implement the video processing method of any of claims 1 to 7 or the video playback method of any of claims 8 to 13.
CN202010847663.2A 2020-08-21 2020-08-21 Video processing method, video playing method, device, equipment and storage medium Pending CN111935542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847663.2A CN111935542A (en) 2020-08-21 2020-08-21 Video processing method, video playing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111935542A true CN111935542A (en) 2020-11-13

Family

ID=73306261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847663.2A Pending CN111935542A (en) 2020-08-21 2020-08-21 Video processing method, video playing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111935542A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06197133A (en) * 1992-12-24 1994-07-15 Toshiba Corp Communication system
CN101039417A (en) * 2007-04-26 2007-09-19 广东威创日新电子有限公司 Multi-block parallel compression video data apparatus and compression method thereof
CN102217315A (en) * 2008-11-12 2011-10-12 汤姆森特许公司 I-frame de-flickering for gop-parallel multi-thread video encoding
CN102547268A (en) * 2010-12-30 2012-07-04 深圳华强数码电影有限公司 Streaming media playback method and equipment
CN107018429A (en) * 2017-04-26 2017-08-04 陈翟 Internet video data compression and frame picture display process

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911346A (en) * 2021-01-27 2021-06-04 北京淳中科技股份有限公司 Video source synchronization method and device
CN113286196A (en) * 2021-05-14 2021-08-20 湖北亿咖通科技有限公司 Vehicle-mounted video playing system and video split-screen display method and device
CN113286196B (en) * 2021-05-14 2023-02-17 亿咖通(湖北)技术有限公司 Vehicle-mounted video playing system and video split-screen display method and device
CN114615548A (en) * 2022-03-29 2022-06-10 湖南国科微电子股份有限公司 Video data processing method and device and computer equipment
CN114615548B (en) * 2022-03-29 2023-12-26 湖南国科微电子股份有限公司 Video data processing method and device and computer equipment
CN114567814A (en) * 2022-04-28 2022-05-31 阿里巴巴达摩院(杭州)科技有限公司 Video processing method, video rendering method, processor and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113