CN111355978B - Video file processing method and device, mobile terminal and storage medium - Google Patents


Publication number
CN111355978B
Authority
CN
China
Prior art keywords
video
video file
texture
rendered
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811574273.1A
Other languages
Chinese (zh)
Other versions
CN111355978A (en)
Inventor
宫昀
Current Assignee
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811574273.1A priority Critical patent/CN111355978B/en
Publication of CN111355978A publication Critical patent/CN111355978A/en
Application granted granted Critical
Publication of CN111355978B publication Critical patent/CN111355978B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/835: Generation of protective data, e.g. certificates
    • H04N 21/8358: Generation of protective data, e.g. certificates, involving watermark

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a video file processing method and apparatus, a mobile terminal, and a storage medium. The method includes: decoding a video file obtained by a video client on a mobile terminal to obtain corresponding video frame images; rendering the video frame images to a texture to obtain a corresponding rendered texture; encoding, based on the rendered texture, to obtain a first video file for the video client to publish; and watermarking the first video file to obtain a watermarked second video file for sharing to other video clients.

Description

Video file processing method and device, mobile terminal and storage medium
Technical Field
The present disclosure relates to media playing technologies, and in particular, to a method and an apparatus for processing a video file, a mobile terminal, and a storage medium.
Background
When a user publishes a video, a short video Application (APP) needs to synthesize two video files, one for publishing and uploading and the other for sharing, in order to attract new users. However, short video APPs in the related art can only generate the file for publishing and the file for sharing separately, and cannot obtain both video files at the same time, which results in a poor user experience.
Disclosure of Invention
In view of this, the present disclosure provides a method and an apparatus for processing a video file, a mobile terminal and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for processing a video file, where the method includes:
decoding a video file obtained by a video client based on a mobile terminal to obtain a corresponding video frame image;
rendering the video frame image to texture to obtain corresponding rendered texture;
encoding, based on the rendered texture, to obtain a first video file for the video client to publish;
and watermarking the first video file to obtain a watermarked second video file for sharing to other video clients.
In the above solution, the encoding, based on the rendered texture, to obtain the first video file for the video client to publish includes:
in response to an encoding instruction sent by a Graphics Processing Unit (GPU), encoding the rendered texture with a Digital Signal Processor (DSP) to obtain the first video file for the video client to publish.
In the foregoing solution, the encoding, based on the rendered texture, to obtain the first video file for the video client to publish includes:
reading, by a Central Processing Unit (CPU), the rendered texture into memory to obtain a corresponding in-memory image;
and encoding the in-memory image with the CPU to obtain the first video file for the video client to publish.
In the foregoing solution, the encoding, based on the rendered texture, to obtain the first video file for the video client to publish includes:
initializing a DSP for encoding the rendered texture;
when the initialization succeeds, encoding the rendered texture with the DSP to obtain the first video file for the video client to publish;
and when the initialization fails, encoding the rendered texture with a CPU to obtain the first video file for the video client to publish.
In the foregoing scheme, the watermarking the first video file to obtain a watermarked second video file for sharing to other video clients includes:
decoding the first video file to obtain video frame images of the first video file;
rendering the video frame images of the first video file to a texture to obtain a watermarked texture;
and encoding the watermarked texture in a hardware encoding mode to obtain the watermarked second video file for sharing to other video clients.
In the foregoing solution, the rendering the video frame images to a texture to obtain a corresponding rendered texture includes:
loading the video frame images in memory to a GPU to obtain a corresponding texture;
and rendering the obtained texture with the GPU to obtain the rendered texture.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for processing a video file, where the apparatus includes:
the first decoding unit is used for decoding a video file obtained by a video client based on the mobile terminal to obtain a corresponding video frame image;
the first rendering unit is used for rendering the video frame image to texture to obtain corresponding rendered texture;
a first encoding unit, configured to encode, based on the rendered texture, a first video file to be published by the video client;
and the watermarking unit is configured to watermark the first video file to obtain a watermarked second video file for sharing to other video clients.
In the above scheme, the first encoding unit is implemented by a DSP;
the first encoding unit is further configured to encode the rendered texture in response to an encoding instruction sent by the GPU, to obtain the first video file for the video client to publish.
In the above scheme, the first encoding unit is implemented by a CPU;
the first encoding unit is further configured to read the rendered texture into memory to obtain a corresponding in-memory image;
and to encode the in-memory image to obtain the first video file for the video client to publish.
In the above scheme, the apparatus further comprises an initialization unit;
the initialization unit is configured to initialize a DSP for encoding the rendered texture;
to call the DSP to encode the rendered texture when the initialization succeeds, obtaining the first video file for the video client to publish;
and to trigger a CPU to encode the rendered texture when the initialization fails, obtaining the first video file for the video client to publish.
In the foregoing solution, the watermarking unit includes:
the second decoding unit is configured to decode the first video file to obtain video frame images of the first video file;
the second rendering unit is configured to render the video frame images of the first video file to a texture to obtain a watermarked texture;
and the second encoding unit is configured to encode the watermarked texture in a hardware encoding mode to obtain the watermarked second video file for sharing to other video clients.
In the above scheme, the first rendering unit is further configured to load the video frame image in the memory to the GPU to obtain a corresponding texture;
rendering the obtained texture to obtain the rendered texture.
In a third aspect, an embodiment of the present disclosure further provides a mobile terminal, including:
a memory for storing executable instructions;
and the processor is used for realizing the video file processing method provided by the embodiment of the disclosure when executing the executable instructions stored in the memory.
In a fourth aspect, the present disclosure further provides a storage medium storing executable instructions, where the executable instructions, when executed, are configured to implement the video file processing method provided in the present disclosure.
The application of the above embodiment of the present disclosure has the following beneficial effects:
by applying the embodiments of the present disclosure, a single processing pass over the video file obtained by the video client of the mobile terminal yields both the video file for the video client to publish and the watermarked video file for sharing to other clients.
Drawings
Fig. 1 is a schematic architecture diagram of a video file processing system according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a composition structure of a mobile terminal according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a video file processing method according to an embodiment of the present disclosure;
fig. 4 is a schematic interface diagram of a video client according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a video file processing method according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of a video file processing method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a processing apparatus for a video file according to an embodiment of the present disclosure.
Detailed Description
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present disclosure belong. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the disclosure.
The flowchart and block diagrams in the figures provided by the disclosed embodiments illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that the terms "first" and "second" are used herein only to distinguish similar objects and do not denote a particular order or sequence; it should be understood that "first" and "second" may be interchanged where appropriate, so that the embodiments of the present disclosure described herein can be practiced in an order other than that illustrated or described herein.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply throughout.
1) Watermark: digital information added to multimedia (images, video, and the like) for purposes such as file authentication and copyright protection; the embedded watermark resides in the host file without impairing the observability or integrity of the original file.
2) Texture: the regular variation of color across objects in a video frame, represented by the texture coordinates and the corresponding color value of each texel in the video frame.
3) In response to: indicates the condition or state on which an executed operation depends; when the condition or state is satisfied, the one or more operations may be executed in real time or with a set delay. Unless otherwise specified, no restriction is placed on the order in which the operations are executed.
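As an illustration of the watermark definition in 1) above (a minimal sketch; the alpha-blend approach, the 0-255 grayscale pixel layout, and the function name are assumptions for illustration, not taken from the disclosure), digital information can be embedded by blending mark pixels into host pixels so the host content remains observable:

```python
def embed_watermark(host_row, mark_row, alpha=0.25):
    """Alpha-blend one row of watermark pixels into one row of host pixels.

    alpha is kept small so the embedded information does not impair the
    observability of the original content (values are 0-255 grayscale).
    """
    return [round((1 - alpha) * h + alpha * m)
            for h, m in zip(host_row, mark_row)]

blended = embed_watermark([100, 200], [255, 255])  # -> [139, 214]
```

A small alpha trades watermark robustness for visual fidelity; a real scheme would also consider where in the frame (or in which transform domain) to embed.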
Referring to fig. 1, fig. 1 is an alternative architecture diagram of a video file processing system provided by an embodiment of the present disclosure, in order to support an exemplary application, a mobile terminal 10 (an exemplary mobile terminal 10-1 and a mobile terminal 10-2 are shown) is connected to a server 30 through a network 20, where the network 20 may be a wide area network or a local area network, or a combination of the two, and data transmission is implemented using a wireless link.
A video client (such as a live video client) is provided on the mobile terminal 10; an original video file is obtained through the video client and decoded to obtain corresponding video frame images; the video frame images are rendered to a texture to obtain a corresponding rendered texture; and, based on the rendered texture, the video file for the video client to publish and the watermarked video file for sharing to other clients are respectively synthesized in a parallel processing mode;
accordingly, the mobile terminal 10 is further configured to display videos published by other mobile terminals on the graphical interface 110 (the graphical interface 110-1 and the graphical interface 110-2 are exemplarily shown) through the video client.
The server 30 is configured to receive a video file published by the mobile terminal 10 through the video client, so that other mobile terminals can play the video file through the video client;
and further configured to receive the watermarked video file that the mobile terminal 10 sends through the video client for sharing to other clients, so that mobile terminals can play the watermarked video file through the shared-to clients.
Referring to fig. 2, fig. 2 is a schematic diagram of the composition structure of a mobile terminal implementing an embodiment of the present disclosure. In the disclosed embodiments, mobile terminals include, but are not limited to, mobile phones, Personal Digital Assistants (PDAs), tablets (PADs), Portable Multimedia Players (PMPs), in-vehicle terminals (e.g., car navigation terminals), and the like. The mobile terminal shown in fig. 2 is only an example and should not limit the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 2, the mobile terminal may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 210, which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 220 or a program loaded from a storage device 280 into a Random Access Memory (RAM) 230. The RAM 230 also stores various programs and data necessary for the operation of the mobile terminal. The processing device 210, the ROM 220, and the RAM 230 are connected to each other through a bus 240. An Input/Output (I/O) interface 250 is also connected to the bus 240.
Generally, the following devices may be connected to I/O interface 250: input devices 260 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 270 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, or the like; storage devices 280 including, for example, magnetic tape, hard disk, etc.; and a communication device 290. The communication means 290 may allow the mobile terminal to communicate with other devices wirelessly or by wire to exchange data. While fig. 2 illustrates a mobile terminal having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, the processes described by the provided flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network through communication device 290, or installed from storage device 280, or installed from ROM 220. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 210.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the disclosed embodiments, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the mobile terminal; or may exist separately and not be assembled into the mobile terminal.
The computer readable medium carries one or more programs, and when the one or more programs are executed by the mobile terminal, the mobile terminal is enabled to execute the video file processing method provided by the embodiment of the disclosure.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) and a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The units and/or modules described in the embodiments of the present disclosure may be implemented by software or hardware.
As a hardware manner, the units and/or modules of the mobile terminal implementing the embodiments of the present disclosure may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components, and are used to execute the method provided by the embodiments of the present disclosure.
Fig. 3 is a schematic flowchart of a video file processing method according to an embodiment of the present disclosure, and referring to fig. 3, the video file processing method according to the embodiment of the present disclosure includes:
step 301: and decoding the video file obtained by the video client based on the mobile terminal to obtain a corresponding video frame image.
In practical applications, the video file processing method of the embodiments of the present disclosure can be implemented by a mobile terminal (such as a mobile phone) provided with a video client (such as a live-streaming client or a short video APP). A user records a video through the video client installed on the mobile terminal to obtain an original video file. During recording, the user can edit the recorded video through the video client, for example by adding special effects or filters, to synthesize a video file with the special effect or filter applied, and publish that video file through the video client.
Fig. 4 is an interface schematic diagram of a video client according to an embodiment of the present disclosure. Referring to fig. 4, after the user finishes recording a video with the video client, a "publish" button may be clicked to synthesize a video file with special-effect processing. When the user also checks "save local" on the interface, two video files are synthesized at once: one is published by a video client (e.g., video client A), and the other is saved and shared to another video client (e.g., video client B).
In an actual implementation, the mobile terminal receives a video composition instruction triggered by the user through the interface of the video client and decodes the video file obtained by the video client to obtain corresponding video frame images. In an embodiment, the mobile terminal obtains the video frame images as follows:
decoding the video file obtained by the video client according to its wrapper format, to obtain video frame images (i.e., buffers) in memory.
Step 302: rendering the video frame images to a texture to obtain a corresponding rendered texture.
In an embodiment, the rendered texture may be obtained as follows:
loading the video frame image in memory to the GPU to obtain a corresponding texture, and rendering the obtained texture with the GPU to obtain the rendered texture. The rendering described here adds a special effect or filter, so the texture obtained by rendering is the texture with the special effect/filter applied.
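The upload-and-render flow of step 302 might be simulated as below (a sketch: `upload_to_texture` stands in for a GPU texture upload such as an OpenGL `glTexImage2D` call, and the "filter" is just a brightness gain; none of these names come from the disclosure):

```python
def upload_to_texture(frame):
    # Stand-in for loading the in-memory frame image to the GPU as a
    # texture; here it simply copies the pixel rows.
    return [row[:] for row in frame]

def render_filter(texture, gain=2.0):
    # Stand-in for the GPU render pass that applies a special effect or
    # filter: a brightness gain with 8-bit clamping.
    return [[min(255, int(p * gain)) for p in row] for row in texture]

rendered = render_filter(upload_to_texture([[10, 200]]))  # -> [[20, 255]]
```

On a real device this pass would run as a fragment shader, keeping the frame data on the GPU so it can be handed directly to a hardware encoder.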
Step 303: encoding, based on the rendered texture, to obtain a first video file for the video client to publish.
In an embodiment, a hardware encoding mode may be used to synthesize the first video file for the video client to publish. Specifically, after obtaining the rendered texture, the GPU sends an encoding instruction to the DSP, and the DSP, in response to the encoding instruction, encodes the rendered texture to obtain the first video file for the video client to publish. The DSP may be integrated into the GPU or exist independently of it. Because the encoding and synthesis of the file for publishing is performed by dedicated DSP hardware, the processing is faster and more efficient than software encoding, and the user experience is better.
In an embodiment, a software encoding mode may also be used to synthesize the first video file for the video client to publish. Specifically, the CPU reads the rendered texture into memory to obtain a corresponding in-memory image (buffer), and encodes the in-memory image to obtain the first video file for the video client to publish. Because the encoding and synthesis is performed by the CPU, the resulting video file is smaller and occupies less storage space on the mobile terminal than one produced by hardware encoding.
Based on the above embodiments, in practical applications the encoding mode for the first video file can be selected according to the needs of the scenario (e.g., faster processing versus a smaller file). For example, when timeliness matters most, the video file is synthesized by hardware encoding; when a small file matters most, it is synthesized by software encoding.
In practical applications, a priority can be set for the encoding mode of the first video file; for example, hardware encoding may be preferred. In an actual implementation, a DSP for encoding the rendered texture is initialized, and whether the initialization succeeds is determined. When the DSP initializes successfully (i.e., the DSP starts up), the hard-encode path is used: the DSP encodes the rendered texture to obtain the first video file for the video client to publish. When the DSP initialization fails, the lower-priority soft-encode path is used: the CPU encodes the rendered texture to obtain the first video file. In this way, the synthesis speed of the first video file is preferentially guaranteed, and when hardware reliability is low and speed cannot be guaranteed, synthesis still completes smoothly, so the user always has a good experience.
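The priority scheme above (hardware path first, software fallback) is plain control flow and can be sketched as follows; the callables are placeholders for illustration, not actual DSP or CPU codec APIs:

```python
def encode_with_fallback(texture, init_dsp, dsp_encode, cpu_encode):
    """Try the hardware (DSP) path first; fall back to CPU software
    encoding when DSP initialization fails, so that composition of the
    first video file always completes."""
    try:
        dsp = init_dsp()            # hard-encode path has priority
    except RuntimeError:
        return cpu_encode(texture)  # lower-priority soft-encode fallback
    return dsp_encode(dsp, texture)

# Simulated backends for illustration.
ok = encode_with_fallback("tex", lambda: "dsp0",
                          lambda d, t: f"hw:{t}", lambda t: f"sw:{t}")

def broken_init():
    raise RuntimeError("DSP unavailable")

fallback = encode_with_fallback("tex", broken_init,
                                lambda d, t: f"hw:{t}", lambda t: f"sw:{t}")
```

The key design point is that initialization failure is detected before any frames are submitted, so the fallback does not lose work already done.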
In practical applications, after obtaining the first video file, the mobile terminal may upload it to the server through the video client to publish it. When a user triggers an obtaining instruction through the video client on a mobile terminal (for example, by clicking the relevant information, such as a link or icon, corresponding to the first video file), the first video file is requested from the server and played.
Step 304: watermarking the first video file to obtain a watermarked second video file for sharing to other video clients.
In the embodiment of the present disclosure, the watermarked second video file is obtained from the first video file; that is, a serial processing scheme is used to synthesize the first video file for the video client to publish and the watermarked second video file for sharing to other video clients. Two video files with different purposes (the special-effect file without a watermark and the watermarked file) are thus synthesized in one processing pass, which greatly shortens the time otherwise needed to generate each file individually and improves the user experience. In an embodiment, the watermarked second video file for sharing to other video clients can be synthesized as follows:
decoding the first video file to obtain the video frame image of the first video file; rendering the video frame image of the first video file to a texture to obtain the watermarked texture; and encoding the watermarked texture in a hardware coding mode to obtain the watermarked second video file for sharing to other video clients.
In actual implementation, the first video file is decoded to obtain its video frame image in the memory; the video frame image is loaded to the GPU to obtain the corresponding texture; the GPU renders the obtained texture to obtain the watermarked texture; and the watermarked texture is encoded in the hardware DSP coding mode to obtain the watermarked second video file for sharing to other video clients. In this way, when a user shares a video file, video client A, for example, obtains the watermarked video file, and the user shares it to video client B through video client A.
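The decode → load-to-GPU → watermark-render → hardware-encode chain can be sketched as below. The helper callables are hypothetical placeholders for the real decoder, GPU upload, watermark render pass, and DSP encoder, not APIs from the patent:

```python
def watermark_video(first_video, decode, upload_to_gpu, render_watermark, hw_encode):
    """Serially derive the watermarked second video file from the first video file."""
    frames = decode(first_video)                       # video frame images in memory
    textures = [upload_to_gpu(f) for f in frames]      # load each frame to the GPU as a texture
    marked = [render_watermark(t) for t in textures]   # GPU renders the watermark onto each texture
    return hw_encode(marked)                           # hardware (DSP) encoding of the watermarked textures
```

The same structure works regardless of how the individual stages are realized, which is why the patent can swap hardware and software encoders behind the last step.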
Fig. 5 is a schematic flowchart of a method for processing a video file according to an embodiment of the present disclosure, and fig. 6 is another schematic flowchart of the method for processing a video file according to an embodiment of the present disclosure, where the method is applied to a mobile terminal, and the mobile terminal is provided with a video client, and with reference to fig. 5 and fig. 6, the method for processing a video file according to an embodiment of the present disclosure includes:
step 501: and the mobile terminal receives the video synthesis instruction.
In practical application, referring to fig. 4, when the user finishes recording a video based on the video client and clicks the "publish" button with "save local" checked, a video composition instruction is triggered; the instruction instructs to compose a video file for the video client to publish, and a watermarked video file for saving locally and/or sharing to other video clients.
Step 502: in response to the video composition instruction, acquiring a video file obtained based on the video client.
Here, in practical applications, the video file obtained based on the video client is the original video file recorded by the user through the video client.
Step 503: decoding the obtained video file to obtain a corresponding video frame image.
In one embodiment, the mobile terminal may obtain the video frame image by:
based on the packaging format of the video file, the video file obtained based on the video client is decoded to obtain a video frame image (buffer) in the memory, that is, the buffer is obtained by decoding the original video in fig. 6.
Step 504: rendering the video frame image to a texture to obtain a corresponding rendered texture.
In an embodiment, obtaining the rendered texture may be achieved by:
loading the video frame image in the memory to the GPU to obtain the corresponding texture (texture), and rendering the obtained texture with the GPU to obtain the rendered texture; that is, in fig. 6, the buffer is loaded to the GPU to obtain the original texture, which is then rendered to obtain the rendered texture. The rendering described here adds a special effect or filter to the texture; thus, the rendered texture is a texture with the special effect/filter added.
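As a toy illustration of what this rendering step does (on a real device it is a GPU shader pass, not Python), a grayscale "filter" over RGB pixels maps an original texture to an effect-applied texture. The luma weights are the usual integer approximation, chosen here for illustration and not taken from the patent:

```python
def apply_filter(texture):
    """texture: list of (r, g, b) pixels; returns the 'rendered' (filtered) texture."""
    rendered = []
    for r, g, b in texture:
        gray = (r * 299 + g * 587 + b * 114) // 1000   # integer luma approximation
        rendered.append((gray, gray, gray))            # each pixel becomes its gray value
    return rendered
```

Any special effect or filter plays the same role: a pure function from the original texture to the rendered texture.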
Step 505: encoding the rendered texture to obtain a first video file for the video client to publish.
Here, the rendered texture may be encoded in a hardware coding mode or a software coding mode. When the hardware coding mode is adopted, in actual implementation the rendered texture is encoded by a DSP integrated in the GPU to obtain the first video file for the video client to publish; that is, in fig. 6, the rendered texture is encoded to obtain the composite video file. When the software coding mode is adopted, in actual implementation the CPU reads the rendered texture into the memory to obtain the corresponding image, and then encodes the image in the memory to obtain the first video file for the video client to publish.
Step 506: decoding the first video file to obtain a video frame image of the first video file.
Here, the first video file is decoded in the same manner as the video file obtained based on the video client.
Step 507: rendering the video frame image of the first video file to a texture to obtain the watermarked texture.
In actual implementation, the video frame image of the first video file in the memory is loaded to the GPU to obtain the corresponding texture (texture), and the GPU renders the obtained texture to obtain the watermarked texture.
Step 508: encoding the watermarked texture to obtain a watermarked second video file for sharing to other video clients.
In practical application, the watermarked texture may be encoded in a hardware coding mode, for example by a DSP, to obtain the watermarked video file for sharing to other video clients.
By applying the embodiment of the present disclosure, two video files with different purposes (the first video file and the second video file) are obtained in a single processing flow; compared with obtaining the two video files separately, this takes less time and gives a better user experience.
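The serial flow of steps 501–508 can be summarized in a minimal simulation: one pass over the original video yields both output files. The decode/encode/effect/watermark functions below are toy stand-ins (frames are modelled as characters), not the real GPU/DSP pipeline:

```python
def decode(video):
    return list(video)                     # "frames" are just characters here

def encode(frames):
    return "".join(frames)

def apply_effect(frame):
    return frame.upper()                   # stands in for special-effect/filter rendering

def apply_watermark(frame):
    return frame + "*"                     # stands in for watermark rendering

def compose(original):
    # Steps 503-505: decode, render the effect, encode -> first video file (no watermark)
    first = encode(apply_effect(f) for f in decode(original))
    # Steps 506-508: decode the first file, render the watermark, encode -> second video file
    second = encode(apply_watermark(f) for f in decode(first))
    return first, second
```

Note that the second file is derived from the already-composited first file, so the effect rendering runs only once; this is the source of the time saving claimed above.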
In practical applications, after obtaining the first video file and the second video file, the mobile terminal may upload the obtained video files to the server through the video client to publish them; when the user triggers an obtaining instruction through the video client (for example, by clicking the relevant information, such as a link or icon, of the corresponding video file), the mobile terminal requests the video file from the server and obtains it for playback.
In an embodiment, after obtaining the second video file, the mobile terminal stores it locally on the mobile terminal, so that upon receiving a user-triggered sharing instruction (instructing to share the second video file to other video clients), the mobile terminal shares the second video file to the other video clients through the video client.
In an embodiment, after obtaining the second video file, the mobile terminal may further present sharing prompt information to the user in an interface of the video client, and share the second video file when receiving a confirmation message triggered by the user.
Fig. 7 is a schematic diagram of a composition structure of a video file processing apparatus according to an embodiment of the present disclosure, and referring to fig. 7, the video file processing apparatus according to the embodiment of the present disclosure includes:
a first decoding unit 71, configured to decode a video file obtained by a video client based on a mobile terminal to obtain a corresponding video frame image;
a first rendering unit 72, configured to render the video frame image to a texture to obtain a corresponding rendered texture;
a first encoding unit 73, configured to encode, based on the rendered texture, a first video file for the video client to publish;
and a watermarking unit 74, configured to perform watermarking processing on the first video file to obtain a second video file with a watermark for sharing to another video client.
In one embodiment, the first encoding unit is implemented by using a DSP;
the first encoding unit is further configured to encode the rendered texture in response to an encoding instruction sent by a Graphics Processing Unit (GPU), so as to obtain a first video file for the video client to issue.
In one embodiment, the first encoding unit is implemented by a CPU;
the first encoding unit is further configured to read the rendered texture to a memory to obtain an image corresponding to the memory;
and coding the image in the memory to obtain a first video file for the video client to publish.
In an embodiment, the apparatus further comprises an initialization unit;
the initialization unit is used for initializing a DSP (digital signal processor) used for encoding the rendered texture;
when the initialization is determined to be successful, calling the DSP to encode the rendered texture to obtain a first video file issued by the video client;
and when the initialization is determined to fail, triggering a CPU to encode the rendered texture to obtain a first video file issued by the video client.
In one embodiment, the watermarking unit includes:
the second decoding unit is used for decoding the first video file to obtain a video frame image of the first video file;
the second rendering unit is used for rendering the video frame image of the first video file to a texture to obtain the texture added with the watermark;
and the second encoding unit is used for encoding the texture added with the watermark in a hardware encoding mode to obtain a second video file added with the watermark and used for sharing to other video clients.
In an embodiment, the first rendering unit is further configured to load the video frame image in the memory to a GPU to obtain a corresponding texture;
rendering the obtained texture to obtain the rendered texture.
Correspondingly, an embodiment of the present disclosure further provides a storage medium storing executable instructions which, when executed, implement the video file processing method provided by the embodiments of the present disclosure.
The above description covers only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any change or substitution that a person skilled in the art can readily conceive of within the technical scope of the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A method for processing a video file, the method comprising:
decoding a video file obtained by a video client based on a mobile terminal to obtain a corresponding video frame image;
rendering the video frame image to texture to obtain corresponding rendered texture;
coding to obtain a first video file for the video client to issue based on the rendered texture;
decoding the first video file to obtain a video frame image of the first video file;
rendering the video frame image of the first video file to a texture to obtain the texture added with the watermark;
and coding the texture added with the watermark by adopting a hardware coding mode to obtain a second video file which is used for sharing to other video clients and is added with the watermark.
2. The method of claim 1, wherein said encoding a first video file for publication by the video client based on the rendered texture comprises:
and responding to a coding instruction sent by a GPU (graphics processing Unit), and coding the rendered texture by adopting a DSP (digital signal processor) to obtain a first video file issued by the video client.
3. The method of claim 1, wherein said encoding a first video file for publication by the video client based on the rendered texture comprises:
reading the rendered texture to a memory by a Central Processing Unit (CPU) to obtain an image corresponding to the memory;
and coding the image in the memory by adopting the CPU to obtain a first video file for the video client to publish.
4. The method of claim 1, wherein said encoding a first video file for publication by the video client based on the rendered texture comprises:
initializing a DSP for encoding the rendered texture;
when the initialization is determined to be successful, the rendered texture is encoded by the DSP, and a first video file issued by the video client is obtained;
and when the initialization is determined to fail, adopting a CPU to encode the rendered texture to obtain a first video file issued by the video client.
5. The method of any of claims 1 to 4, wherein said rendering said video frame image to a texture resulting in a corresponding rendered texture comprises:
loading the video frame image in the memory to a GPU to obtain corresponding texture;
and rendering the obtained texture by adopting a GPU to obtain the rendered texture.
6. An apparatus for processing a video file, the apparatus comprising:
the first decoding unit is used for decoding a video file obtained by a video client based on the mobile terminal to obtain a corresponding video frame image;
the first rendering unit is used for rendering the video frame image to texture to obtain corresponding rendered texture;
a first encoding unit, configured to encode, based on the rendered texture, a first video file for the video client to publish;
the second decoding unit is used for decoding the first video file to obtain a video frame image of the first video file;
the second rendering unit is used for rendering the video frame image of the first video file to a texture to obtain the texture added with the watermark;
the second coding unit is used for coding the texture added with the watermark in a hardware coding mode to obtain a second video file added with the watermark and used for sharing to other video clients;
and the watermarking unit is used for performing watermarking processing on the first video file to obtain a second video file which is used for sharing to other video clients and is added with watermarks.
7. The apparatus of claim 6, wherein the first encoding unit is implemented using a Digital Signal Processor (DSP);
the first encoding unit is further configured to encode the rendered texture in response to an encoding instruction sent by a Graphics Processing Unit (GPU), so as to obtain a first video file for the video client to issue.
8. The apparatus of claim 6, wherein the first encoding unit is implemented using a Central Processing Unit (CPU);
the first encoding unit is further configured to read the rendered texture to a memory to obtain an image corresponding to the memory;
and coding the image in the memory to obtain a first video file for the video client to publish.
9. The apparatus of claim 6, wherein the apparatus further comprises an initialization unit;
the initialization unit is used for initializing a DSP (digital signal processor) used for encoding the rendered texture;
when the initialization is determined to be successful, calling the DSP to encode the rendered texture to obtain a first video file issued by the video client;
and when the initialization is determined to fail, triggering a CPU to encode the rendered texture to obtain a first video file issued by the video client.
10. The apparatus according to any one of claims 6 to 9, wherein
the first rendering unit is further configured to load the video frame image in the memory to a GPU to obtain a corresponding texture;
rendering the obtained texture to obtain the rendered texture.
11. A mobile terminal, comprising:
a memory for storing executable instructions;
a processor for implementing the method of processing a video file as claimed in any one of claims 1 to 5 when executing executable instructions stored in the memory.
12. A storage medium storing executable instructions for implementing a method of processing a video file as claimed in any one of claims 1 to 5 when executed.
CN201811574273.1A 2018-12-21 2018-12-21 Video file processing method and device, mobile terminal and storage medium Active CN111355978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811574273.1A CN111355978B (en) 2018-12-21 2018-12-21 Video file processing method and device, mobile terminal and storage medium


Publications (2)

Publication Number Publication Date
CN111355978A CN111355978A (en) 2020-06-30
CN111355978B true CN111355978B (en) 2022-09-06


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4222983A1 (en) * 2020-09-30 2023-08-09 Snap Inc. Generating media content items for sharing to external applications
CN114845162B (en) * 2021-02-01 2024-04-02 北京字节跳动网络技术有限公司 Video playing method and device, electronic equipment and storage medium
CN118138702A (en) * 2021-08-12 2024-06-04 荣耀终端有限公司 Video processing method, electronic device, chip and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018024179A1 (en) * 2016-08-04 2018-02-08 腾讯科技(深圳)有限公司 Video processing method, server, terminal, and computer storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102638658A (en) * 2012-03-01 2012-08-15 盛乐信息技术(上海)有限公司 Method and system for editing audio-video
CN106506335B (en) * 2016-11-10 2019-08-30 北京小米移动软件有限公司 The method and device of sharing video frequency file
CN108282164B (en) * 2017-01-05 2022-07-15 腾讯科技(深圳)有限公司 Data encoding and decoding method and device
CN107277616A (en) * 2017-07-21 2017-10-20 广州爱拍网络科技有限公司 Special video effect rendering intent, device and terminal




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.