CN114845162B - Video playing method and device, electronic equipment and storage medium - Google Patents
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Abstract
The embodiments of the disclosure provide a video playing method and device, an electronic device, and a storage medium. The method includes: acquiring image frame data of a video and sending a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data into frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser. In other words, when a video is played on the web side, the embodiments of the disclosure invoke the GPU to hard-decode the image frame data, which reduces CPU usage on the terminal device, improves decoding efficiency, and reduces video stuttering.
Description
Technical Field
The embodiments of the disclosure relate to the technical field of video processing, and in particular to a video playing method and device, an electronic device, and a storage medium.
Background
During video playback on the web (browser) side, a software-decoding ("soft decode") scheme is generally adopted: a web-side controller (software) is responsible for decoding and playing each video.
However, in this scheme the controller occupies the central processing unit (Central Processing Unit, CPU), and an excessive CPU load causes the video to stutter and drop frames.
Disclosure of Invention
The embodiments of the disclosure provide a video playing method and device, an electronic device, and a storage medium, so as to solve the problem in the prior art that web-side video playing is prone to stuttering.
In a first aspect, an embodiment of the present disclosure provides a video playing method, including: acquiring image frame data of a video and sending a decoding instruction to a graphics processing unit (GPU), where the decoding instruction instructs the GPU to decode the image frame data into frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser.
In a second aspect, an embodiment of the present disclosure provides a video playing device, including: a first acquiring module configured to acquire image frame data of a video and send a decoding instruction to the GPU, where the decoding instruction instructs the GPU to decode the image frame data into frame texture data; a second acquiring module configured to acquire the frame texture data; and a rendering processing module configured to render the frame texture data and display the rendered frame texture data through a browser.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video playing method according to the first aspect and its various possible designs.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the video playing method according to the first aspect and its various possible designs.
In the video playing method and device, electronic device, and storage medium provided by the embodiments, the method includes: acquiring image frame data of a video and sending a decoding instruction to the GPU, where the decoding instruction instructs the GPU to decode the image frame data into frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser. In other words, when a video is played on the web side, the embodiments of the disclosure invoke the GPU to hard-decode the image frame data, which reduces CPU usage on the terminal device, improves decoding efficiency, and reduces video stuttering.
Drawings
To more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show some embodiments of the present disclosure; other drawings can be derived from them by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of a prior art web-side video playback;
fig. 2 is a schematic diagram of a multi-track video according to an embodiment of the disclosure;
fig. 3 is a first flowchart of a video playing method according to an embodiment of the disclosure;
fig. 4 is a second flowchart of a video playing method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a process flow of multi-track video playing according to an embodiment of the disclosure;
fig. 6 is a block diagram of a video playing device according to an embodiment of the present disclosure;
fig. 7 is a schematic hardware structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this disclosure without inventive effort fall within the scope of this disclosure.
First, the terms involved in the present disclosure are explained:
Graphics processing unit (Graphics Processing Unit, GPU): also known as a display core, visual processor, or display chip, a microprocessor dedicated to image- and graphics-related computation on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smartphones).
During video playback on the web side, a software-decoding scheme is generally adopted: a web-side controller (software) is responsible for decoding and playing each video.
Fig. 1 is a schematic diagram of web-side video playing in the prior art. As shown in fig. 1, when the web side plays a video, decoding and rendering of video frames are performed in a CPU sub-thread, and an OffscreenCanvas finally performs off-screen rendering to draw the picture into a Canvas tag in the document object model (Document Object Model, DOM). That is, in the existing scheme, decoding and rendering of video frames occupy the CPU, and an excessive CPU load causes the video to stutter.
In particular, when the web side needs to play a multi-track video, as shown in fig. 2, videos such as file1, file2, and file3 are loaded into multiple tracks of the web-side video processor. The playing effect of the multi-track video is that the videos overlay one another, with the topmost video being displayed.
In view of the above technical problem, the technical concept of the present disclosure is as follows: the web controller is only responsible for scheduling the playing and pausing of each video and no longer handles the decoding process. Decoding is handed to the GPU for hard decoding, and the controller directly reads the texture data of the current frame, thereby playing the video.
Referring to fig. 3, fig. 3 is a flowchart illustrating a video playing method according to an embodiment of the disclosure. The video playing method comprises the following steps:
s101, acquiring image frame data of a video, and sending a decoding instruction to a graphics processor GPU.
The decoding instruction instructs the GPU to decode the image frame data into frame texture data. The video is played based on the browser's multimedia video tag.
Specifically, the execution body of this embodiment is a controller on a terminal device. The controller provides web-side video editing functions and typically runs on the central processing unit (Central Processing Unit, CPU); the terminal device is also equipped with a GPU.
As an alternative embodiment, the video is a multi-track video; the step S101 of acquiring image frame data of a video includes: and acquiring a plurality of image frame data corresponding to the multi-track video.
In this embodiment, when the multi-track video is played on the web side, the controller may acquire a plurality of image frame data corresponding to the multi-track video and then send a decoding instruction to the GPU, so that the GPU decodes each piece of image frame data into frame texture data and returns the frame texture data to the controller.
S102, acquiring the frame texture data.
Specifically, after the GPU obtains the frame texture data by decoding, it returns the data to the controller; that is, the controller acquires the frame texture data.
And S103, rendering the frame texture data, and displaying the rendered frame texture data through a browser.
Specifically, after the controller obtains the decoded frame texture data, it can render the data according to rendering parameters and then display it through the browser.
By repeating steps S101 to S103, video playing is realized on the web side.
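As a non-normative illustration, one iteration of steps S101 to S103 can be sketched in TypeScript. The `GpuDecoder` and `Renderer` interfaces below are hypothetical stand-ins for the GPU decoding instruction and the browser display path; they are not APIs named in this disclosure.

```typescript
// Hypothetical stand-in for the GPU: decodes one unit of image frame
// data into frame texture data (steps S101/S102).
interface GpuDecoder {
  decode(frameData: ArrayBuffer): Uint8Array;
}

// Hypothetical stand-in for step S103: draws rendered frame texture
// data into the browser page.
interface Renderer {
  draw(texture: Uint8Array): void;
}

// The controller only schedules: it forwards frame data to the GPU,
// reads back the decoded frame texture data, and hands it to rendering.
class WebPlayerController {
  constructor(private gpu: GpuDecoder, private renderer: Renderer) {}

  // One iteration of S101-S103 for a single frame.
  playFrame(frameData: ArrayBuffer): void {
    const texture = this.gpu.decode(frameData); // S101 + S102
    this.renderer.draw(texture);                // S103
  }
}
```

Repeating `playFrame` per frame interval models the loop over steps S101 to S103 described above.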
In one embodiment of the present disclosure, based on the embodiment of fig. 3, before sending the decoding instruction to the GPU in step S101, the method further includes: receiving a video playing instruction input by a user; and executing the step of sending the decoding instruction to the GPU according to the video playing instruction.
Specifically, in the web-side video processor, the controller sends a decoding instruction to the GPU only when it receives a video playing instruction input by the user, for example, a click on the "play" button of the video processor interface; the video is then decoded, rendered, and played.
In one embodiment of the present disclosure, on the basis of the embodiment of fig. 3 described above, the method further includes: receiving a video pause instruction input by a user; and stopping executing the step of sending the decoding instruction to the GPU according to the video pause instruction.
Specifically, in the web-side video processor, when the user inputs a video pause instruction, for example, clicks the "pause" button of the video processor interface, the controller stops sending decoding instructions to the GPU; decoding and rendering stop, and playback of the video is paused.
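The play/pause scheduling described above can be sketched as a small state machine. `PlaybackScheduler` and `sendDecode` are illustrative names, not part of this disclosure; the sketch only shows that decoding instructions are issued while playing and withheld while paused.

```typescript
// Illustrative sketch: the controller issues decoding instructions to
// the GPU only while in the playing state; a pause instruction stops
// further decoding instructions from being sent.
class PlaybackScheduler {
  private playing = false;

  // sendDecode stands in for "send a decoding instruction to the GPU".
  constructor(private sendDecode: () => void) {}

  play(): void { this.playing = true; }    // video playing instruction
  pause(): void { this.playing = false; }  // video pause instruction

  // Called once per frame interval; returns whether a decode was issued.
  tick(): boolean {
    if (!this.playing) return false;
    this.sendDecode();
    return true;
  }
}
```

Because the gate sits before the decode call, pausing stops decoding, rendering, and playing in one place, matching the behavior described above.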
In summary, the controller of the terminal device in this embodiment is only responsible for scheduling the playing and pausing of each video. It no longer performs decoding; decoding is handed to the GPU, and the controller directly reads the frame texture data and renders it, thereby obtaining and playing the current video picture.
The video playing method provided by the embodiments of the disclosure includes: acquiring image frame data of a video and sending a decoding instruction to the GPU, where the decoding instruction instructs the GPU to decode the image frame data into frame texture data; acquiring the frame texture data; and rendering the frame texture data and displaying the rendered frame texture data through a browser. In other words, when a video is played on the web side, the embodiments of the disclosure invoke the GPU to hard-decode the image frame data, which reduces CPU usage on the terminal device, improves decoding efficiency, and reduces video stuttering.
On the basis of the above embodiment, referring to fig. 4, fig. 4 is a second flowchart of a video playing method according to an embodiment of the present disclosure, where the video playing method includes:
s201, acquiring image frame data of a video, and sending a decoding instruction to the GPU through a main thread.
The decoding instruction instructs the GPU to decode the image frame data into frame texture data, and the video is played through the video tag.
S202, acquiring the frame texture data.
And S203, rendering the frame texture data through a main thread, and displaying the rendered frame texture data through a browser.
Step S202 in this embodiment is implemented similarly to step S102 in the previous embodiment and is not described here again.
Unlike the previous embodiment, this embodiment further defines a specific implementation of the decoding and rendering of video frames: the decoding instruction is sent to the GPU by the main thread, and the frame texture data is rendered by the main thread.
Specifically, when the web-side video processor runs, there may be a main thread and at least one worker thread. Only the main thread can operate the DOM (for example, the video tag); a worker thread cannot read the DOM state or operate the DOM, and many key components are unavailable to worker threads. Therefore, in this embodiment, after the multi-track video is loaded, the decoding instruction is sent to the GPU through the main thread; the GPU then returns the decoded frame texture data to the main thread, where it is rendered to obtain and play the current video frame.
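One plausible realization of main-thread hard decoding through the video tag is to upload the current frame of an HTML video element (decoded by the browser's hardware media pipeline) as a WebGL texture via `texImage2D`. The sketch below is an assumption about the mechanism, not the patented implementation; `MinimalGL` is a hypothetical structural subset of `WebGLRenderingContext` so the function can be exercised without a browser.

```typescript
// Minimal structural subset of WebGLRenderingContext used by the sketch.
// The constant values are the standard OpenGL ES enum values.
interface MinimalGL {
  TEXTURE_2D: number;
  RGBA: number;
  UNSIGNED_BYTE: number;
  createTexture(): object;
  bindTexture(target: number, tex: object): void;
  texImage2D(
    target: number, level: number, internalFormat: number,
    format: number, type: number, source: unknown,
  ): void;
}

// Uploads the current frame of a hardware-decoded <video> element as a
// GPU texture. In a browser, `source` would be an HTMLVideoElement and
// `gl` a real WebGL context obtained from a canvas.
function uploadVideoFrame(gl: MinimalGL, source: unknown): object {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // The 6-argument texImage2D overload accepts a TexImageSource such as
  // an HTMLVideoElement, sampling its currently displayed frame.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, source);
  return tex;
}
```

Because the video element lives in the DOM, this upload must happen on the main thread, which is consistent with the thread constraints described above.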
In one embodiment of the present disclosure, based on the foregoing embodiment, before the frame texture data is rendered by the main thread, the method further includes: editing the frame texture data through a sub-thread to obtain edited frame texture data. Rendering the frame texture data by the main thread then includes: rendering the edited frame texture data through the main thread.
Specifically, after the decoding instruction is sent to the GPU through the main thread of the CPU, the CPU acquires the frame texture data decoded by the GPU, edits the frame texture data through a sub-thread, and renders the edited frame texture data in the main thread. Referring to fig. 5, fig. 5 is a flowchart of a video playing process according to an embodiment of the present disclosure. In the main thread, frame texture data is obtained after hard decoding through the video tag and is handed to the sub-threads (the other threads); in the sub-threads, the frame texture data is read by a customized reader and processed by the editor components, after which it is returned to the rendering queue in the main thread and rendered to the Canvas in the DOM, so that the current video picture is obtained. All textures are produced by the main thread and consumed by the main thread, forming a virtual data stream.
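The producer/consumer flow around the rendering queue of fig. 5 can be sketched as follows. This is a simplified, single-threaded model: `EditorStage` functions stand in for the editor components in the sub-threads, and `RenderQueue` stands in for the main thread's rendering queue. None of these names come from the disclosure, and the real worker messaging is elided.

```typescript
// Decoded frame texture data, modeled as raw bytes for the sketch.
type Texture = Uint8Array;

// One editor component (scale, rotate, translate, special effect, ...),
// modeled as a pure texture-to-texture function.
type EditorStage = (t: Texture) => Texture;

class RenderQueue {
  private queue: Texture[] = [];

  constructor(private stages: EditorStage[]) {}

  // Producer side: run every editor stage over the decoded texture,
  // then enqueue the edited result for rendering.
  submit(decoded: Texture): void {
    let t = decoded;
    for (const stage of this.stages) t = stage(t);
    this.queue.push(t);
  }

  // Consumer side (main thread): take the next edited texture, if any,
  // to render it to the Canvas.
  next(): Texture | undefined {
    return this.queue.shift();
  }
}
```

Draining the queue once per animation frame on the main thread would yield the "virtual data stream" of produced and consumed textures described above.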
In one embodiment of the present disclosure, in the sub-thread, the editing process includes at least one of: scaling, rotating, translating, and adding special effects. In particular, the various editing operations on the frame texture data, such as scaling, translating, rotating, and adding special effects, may be assigned to the components in the sub-threads, as shown by the other threads in fig. 5.
In addition, when a sub-thread edits the multi-track video, the video of each track is processed individually.
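The editing operations listed above (scaling, rotating, translating) are commonly expressed as 2D affine transforms that can be composed into a single 3x3 matrix per track before rendering. The formulation below is illustrative only; the disclosure does not specify how the editing operations are computed.

```typescript
// Row-major 3x3 matrix; points are column vectors (x, y, 1).
type Mat3 = number[];

// Matrix product a * b: apply b first, then a.
function mul(a: Mat3, b: Mat3): Mat3 {
  const out = new Array(9).fill(0);
  for (let r = 0; r < 3; r++)
    for (let c = 0; c < 3; c++)
      for (let k = 0; k < 3; k++)
        out[r * 3 + c] += a[r * 3 + k] * b[k * 3 + c];
  return out;
}

const scale = (sx: number, sy: number): Mat3 => [sx, 0, 0, 0, sy, 0, 0, 0, 1];
const translate = (tx: number, ty: number): Mat3 => [1, 0, tx, 0, 1, ty, 0, 0, 1];
const rotate = (rad: number): Mat3 => {
  const c = Math.cos(rad), s = Math.sin(rad);
  return [c, -s, 0, s, c, 0, 0, 0, 1];
};

// Apply a composed transform to a point (x, y) of a track's frame.
function apply(m: Mat3, x: number, y: number): [number, number] {
  return [m[0] * x + m[1] * y + m[2], m[3] * x + m[4] * y + m[5]];
}
```

Composing each track's edits into one matrix keeps per-track processing independent, which fits the per-track editing described above.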
On the basis of the above embodiment, the decoding instruction is sent to the GPU through the main thread, and the frame texture data is rendered through the main thread. That is, when the embodiments of the disclosure play a multi-track video on the web side, the GPU is invoked to hard-decode the multi-track video, which reduces CPU usage on the terminal device and reduces stuttering of multi-track high-definition video.
Corresponding to the video playing method of the above embodiment, fig. 6 is a block diagram of a video playing device according to an embodiment of the present disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 6, the apparatus includes: the first acquisition module 10, the second acquisition module 20 and the rendering processing module 30.
The first acquiring module 10 is configured to acquire image frame data of a video, and send a decoding instruction to the graphics processor GPU, where the decoding instruction is configured to instruct decoding of the image frame data to obtain frame texture data; a second acquiring module 20, configured to acquire the frame texture data; the rendering processing module 30 is configured to render the frame texture data, and display the rendered frame texture data through a browser.
In one embodiment of the present disclosure, the video is a multi-track video; the first obtaining module 10 is specifically configured to: and acquiring a plurality of image frame data corresponding to the multi-track video.
In one embodiment of the disclosure, the first obtaining module is specifically configured to: and sending a decoding instruction to the GPU through the main thread.
In one embodiment of the present disclosure, the rendering processing module 30 is specifically configured to: rendering the frame texture data by a main thread.
In one embodiment of the present disclosure, the rendering processing module 30 is further configured to: editing the frame texture data through a sub-thread to obtain frame texture data after editing; and rendering the edited frame texture data through the main thread.
In one embodiment of the present disclosure, the editing process includes at least one of: scaling, rotating, translating, adding special effects.
The device provided in this embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
The embodiment of the disclosure also provides an electronic device, including: at least one processor and memory; the memory stores computer-executable instructions; the at least one processor executes computer-executable instructions stored in the memory, causing the at least one processor to perform the video playback method as described above in the first aspect and the various possible designs of the first aspect.
Referring to fig. 7, there is shown a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure, which electronic device 700 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM) 702 or a program loaded from a storage device 708 into a random access memory (Random Access Memory, RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
That is, the embodiments of the present disclosure further provide a computer readable storage medium having stored therein computer executable instructions that when executed by a processor implement the video playing method according to the first aspect and the various possible designs of the first aspect.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or it may be connected to an external computer (for example, via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments in which the above features are interchanged with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
Claims (8)
1. A video playing method, the method comprising:
acquiring image frame data of a video, and sending a decoding instruction to a graphics processing unit (GPU), wherein the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data;
acquiring the frame texture data;
rendering the frame texture data, and displaying the rendered frame texture data through a browser;
the rendering of the frame texture data includes:
rendering the frame texture data through a main thread;
before the rendering of the frame texture data through the main thread, the method further comprises:
performing editing processing on the frame texture data through a sub-thread to obtain edited frame texture data;
the rendering of the frame texture data through the main thread includes:
rendering the edited frame texture data through the main thread.
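The flow of claim 1 (decode into frame texture data, edit on a sub-thread, render on the main thread, display in a browser) can be modeled as a three-stage pipeline. The sketch below is illustrative only and is not part of the patent: it stands in for the GPU decode, the sub-thread, and the main-thread renderer with plain functions, and all names (`decodeToTexture`, `editTexture`, `renderTexture`) are hypothetical.

```typescript
// Illustrative three-stage pipeline modeling claim 1 (all names hypothetical).
type FrameData = { frameIndex: number; bytes: Uint8Array };
type FrameTexture = { frameIndex: number; pixels: Float32Array; edited: boolean };

function decodeToTexture(frame: FrameData): FrameTexture {
  // Stage 1: in the claimed method this decode is performed by the GPU
  // in response to the decoding instruction; here we just normalize bytes.
  const pixels = Float32Array.from(frame.bytes, (b) => b / 255);
  return { frameIndex: frame.frameIndex, pixels, edited: false };
}

function editTexture(tex: FrameTexture, gain: number): FrameTexture {
  // Stage 2: in the claimed method this editing runs on a sub-thread
  // before the main thread renders the result.
  return {
    frameIndex: tex.frameIndex,
    pixels: tex.pixels.map((p) => Math.min(1, p * gain)),
    edited: true,
  };
}

function renderTexture(tex: FrameTexture): string {
  // Stage 3: stand-in for main-thread rendering and browser display.
  return `frame ${tex.frameIndex}: ${tex.pixels.length} texels (edited=${tex.edited})`;
}

const raw: FrameData = { frameIndex: 0, bytes: Uint8Array.from([0, 128, 255]) };
const rendered = renderTexture(editTexture(decodeToTexture(raw), 1.5));
```

In a real browser implementation the sub-thread of claim 1 would typically be a Web Worker and the main-thread rendering a WebGL draw call, but the claim itself does not mandate specific APIs.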
2. The method of claim 1, wherein the video is a multi-track video; the acquiring image frame data of the video includes:
and acquiring a plurality of image frame data corresponding to the multi-track video.
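For the multi-track case of claim 2, one frame of image data is typically fetched per track for the same presentation timestamp. The sketch below is an illustrative assumption, not taken from the patent; the `framesAt` helper and the frame layout are hypothetical.

```typescript
// Hypothetical multi-track frame lookup illustrating claim 2:
// for a given timestamp, select one frame from each track.
type TrackFrame = { trackId: number; timestampMs: number; data: string };

function framesAt(tracks: TrackFrame[][], timestampMs: number): TrackFrame[] {
  // For each track, pick the latest frame at or before the requested timestamp.
  return tracks.map((frames) =>
    frames.reduce((best, f) =>
      f.timestampMs <= timestampMs && f.timestampMs > best.timestampMs ? f : best
    )
  );
}

const tracks: TrackFrame[][] = [
  [{ trackId: 0, timestampMs: 0, data: "a0" }, { trackId: 0, timestampMs: 40, data: "a1" }],
  [{ trackId: 1, timestampMs: 0, data: "b0" }, { trackId: 1, timestampMs: 40, data: "b1" }],
];
const current = framesAt(tracks, 40); // one frame per track at t = 40 ms
```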
3. The method of claim 1 or 2, wherein the sending decoding instructions to the graphics processor GPU comprises:
and sending a decoding instruction to the GPU through the main thread.
4. The method of claim 1, wherein the editing process comprises at least one of:
scaling, rotating, translating, adding special effects.
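The geometric operations listed in claim 4 (scaling, rotation, translation) are commonly expressed as one composed affine transform applied to texture coordinates before rendering. The following is an illustrative sketch, not taken from the patent; the row-major matrix layout and helper names are assumptions.

```typescript
// Hypothetical 3x3 affine-transform helpers illustrating claim 4.
type Mat3 = number[]; // 9 entries, row-major

function scale(sx: number, sy: number): Mat3 {
  return [sx, 0, 0, 0, sy, 0, 0, 0, 1];
}
function rotate(rad: number): Mat3 {
  const c = Math.cos(rad), s = Math.sin(rad);
  return [c, -s, 0, s, c, 0, 0, 0, 1];
}
function translate(tx: number, ty: number): Mat3 {
  return [1, 0, tx, 0, 1, ty, 0, 0, 1];
}
function mul(a: Mat3, b: Mat3): Mat3 {
  // Standard 3x3 matrix product: (mul(a, b)) applies b first, then a.
  const out: Mat3 = new Array(9).fill(0);
  for (let r = 0; r < 3; r++)
    for (let c = 0; c < 3; c++)
      for (let k = 0; k < 3; k++) out[r * 3 + c] += a[r * 3 + k] * b[k * 3 + c];
  return out;
}
function apply(m: Mat3, x: number, y: number): [number, number] {
  return [m[0] * x + m[1] * y + m[2], m[3] * x + m[4] * y + m[5]];
}

// Compose: scale by 2, then translate by (3, 4) — applied right-to-left.
const m = mul(translate(3, 4), scale(2, 2));
const p = apply(m, 1, 1); // (1,1) -> scaled to (2,2) -> translated to (5,6)
```

Composing the operations into a single matrix lets a sub-thread hand the main thread one transform per frame instead of a list of separate edit steps; special effects would be handled separately (e.g. in a fragment shader).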
5. A multi-track video playback device, comprising:
the first acquisition module is used for acquiring image frame data of a video and sending a decoding instruction to the GPU, wherein the decoding instruction instructs the GPU to decode the image frame data to obtain frame texture data;
the second acquisition module is used for acquiring the frame texture data;
the rendering processing module is used for rendering the frame texture data and displaying the rendered frame texture data through a browser;
the rendering processing module is specifically configured to perform editing processing on the frame texture data through a sub-thread to obtain edited frame texture data, and to render the edited frame texture data through a main thread.
6. The apparatus of claim 5, wherein the video is a multi-track video; the first obtaining module is specifically configured to:
and acquiring a plurality of image frame data corresponding to the multi-track video.
7. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video playing method of any one of claims 1 to 4.
8. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the video playing method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110137036.4A CN114845162B (en) | 2021-02-01 | 2021-02-01 | Video playing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114845162A (en) | 2022-08-02
CN114845162B (en) | 2024-04-02
Family
ID=82561170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110137036.4A Active CN114845162B (en) | 2021-02-01 | 2021-02-01 | Video playing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114845162B (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6430591B1 (en) * | 1997-05-30 | 2002-08-06 | Microsoft Corporation | System and method for rendering electronic images |
GB0216275D0 (en) * | 2001-07-12 | 2002-08-21 | Nec Corp | Multi-thread executing method and parallel processing system |
US8913068B1 (en) * | 2011-07-12 | 2014-12-16 | Google Inc. | Displaying video on a browser |
CN104853254A (en) * | 2015-05-26 | 2015-08-19 | 深圳市理奥网络技术有限公司 | Video playing method and mobile terminal |
CN105933724A (en) * | 2016-05-23 | 2016-09-07 | 福建星网视易信息系统有限公司 | Video producing method, device and system |
CN107277616A (en) * | 2017-07-21 | 2017-10-20 | 广州爱拍网络科技有限公司 | Special video effect rendering intent, device and terminal |
CN107948735A (en) * | 2017-12-06 | 2018-04-20 | 北京金山安全软件有限公司 | Video playing method and device and electronic equipment |
EP3407563A1 (en) * | 2017-05-26 | 2018-11-28 | INTEL Corporation | Method, apparatus and machine readable medium for accelerating network security monitoring |
CN109218802A (en) * | 2018-08-23 | 2019-01-15 | Oppo广东移动通信有限公司 | Method for processing video frequency, device, electronic equipment and computer-readable medium |
CN109587559A (en) * | 2018-11-27 | 2019-04-05 | Oppo广东移动通信有限公司 | Method for processing video frequency, device, electronic equipment and storage medium |
CN109862409A (en) * | 2019-03-18 | 2019-06-07 | 广州市网星信息技术有限公司 | Video decoding, playback method, device, system, terminal and storage medium |
CN110620954A (en) * | 2018-06-20 | 2019-12-27 | 北京优酷科技有限公司 | Video processing method and device for hard solution |
CN110704768A (en) * | 2019-10-08 | 2020-01-17 | 支付宝(杭州)信息技术有限公司 | Webpage rendering method and device based on graphics processor |
CN111355978A (en) * | 2018-12-21 | 2020-06-30 | 北京字节跳动网络技术有限公司 | Video file processing method and device, mobile terminal and storage medium |
CN111405288A (en) * | 2020-03-19 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Video frame extraction method and device, electronic equipment and computer readable storage medium |
WO2020248668A1 (en) * | 2019-06-10 | 2020-12-17 | 海信视像科技股份有限公司 | Display and image processing method |
CN112291628A (en) * | 2020-11-25 | 2021-01-29 | 杭州视洞科技有限公司 | Multithreading video decoding playing method based on web browser |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130187933A1 (en) * | 2012-01-23 | 2013-07-25 | Google Inc. | Rendering content on computing systems |
CN115883857A (en) * | 2021-09-27 | 2023-03-31 | 北京字跳网络技术有限公司 | Live gift cloud rendering method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101239029B1 (en) | Multi-buffer support for off-screen surfaces in a graphics processing system | |
US9077970B2 (en) | Independent layered content for hardware-accelerated media playback | |
US20220417486A1 (en) | Method and apparatus for processing three-dimensional video, readable storage medium and electronic device | |
CN110070496A (en) | Generation method, device and the hardware device of image special effect | |
CN115359226B (en) | Texture compression-based VR display method for Hongmong system, electronic device and medium | |
CN113377366A (en) | Control editing method, device, equipment, readable storage medium and product | |
CN113507637A (en) | Media file processing method, device, equipment, readable storage medium and product | |
CN111355978B (en) | Video file processing method and device, mobile terminal and storage medium | |
CN113411661B (en) | Method, apparatus, device, storage medium and program product for recording information | |
CN114845162B (en) | Video playing method and device, electronic equipment and storage medium | |
CN111355960B (en) | Method and device for synthesizing video file, mobile terminal and storage medium | |
CN112070868B (en) | Animation playing method based on iOS system, electronic equipment and medium | |
CN113747226A (en) | Video display method and device, electronic equipment and program product | |
WO2024140069A1 (en) | Video processing method and apparatus, and electronic device | |
CN115802104A (en) | Video skip playing method and device, electronic equipment and storage medium | |
CN116095250B (en) | Method and device for video cropping | |
CN117528096A (en) | Image processing method, apparatus, storage medium, and program product | |
CN117615217A (en) | Method and system for realizing transparent video atmosphere in applet | |
WO2023030402A1 (en) | Video processing method, apparatus and system | |
WO2023011557A1 (en) | Image processing method and apparatus, and device | |
CN115623241A (en) | Video export method and terminal equipment | |
WO2024140126A1 (en) | Online video editing method and apparatus, and electronic device and storage medium | |
WO2024174923A1 (en) | Image processing method and apparatus, and electronic device | |
CN118264850A (en) | Video processing method and device and playing equipment | |
CN115529421A (en) | Video processing method, device, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||