CN109151567A - Video data processing method and device and computer readable storage medium - Google Patents

Video data processing method and device and computer readable storage medium

Info

Publication number
CN109151567A
Authority
CN
China
Prior art keywords
video
video data
data
processing
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710465506.3A
Other languages
Chinese (zh)
Inventor
张佳亮
翟剑峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Momo Information Technology Co Ltd
Original Assignee
Beijing Momo Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Momo Information Technology Co Ltd filed Critical Beijing Momo Information Technology Co Ltd
Priority to CN201710465506.3A priority Critical patent/CN109151567A/en
Publication of CN109151567A publication Critical patent/CN109151567A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/858: Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Generation (AREA)
  • Studio Circuits (AREA)

Abstract

The invention discloses a video data processing method and device and a computer readable storage medium. The video data processing method comprises the following steps: acquiring an address of video source data, and performing OES texture mapping on the video source data according to the address of the video source data to obtain an OES texture used for indexing the address of the video source data; reading the video source data by using the OES texture, and performing special effect processing on the video source data to obtain video data after special effect processing; and encoding the video data after the special effect processing, and sending the encoded video data. By using the video data processing method in the embodiment of the invention, special effect processing can be performed efficiently on the video to be shared.

Description

Video data processing method and device and computer readable storage medium
Technical Field
The present invention relates to the field of video technologies, and in particular, to a method and an apparatus for processing video data, and a computer-readable storage medium.
Background
Currently, with the development of the mobile internet, video sharing on the internet has become a new form of information dissemination. In order to improve the user's video experience, special effect processing generally needs to be performed on the video to be shared. Special effect processing of the video to be shared refers to editing, rendering, screen mixing and similar operations on the video.
In the prior art, when a video to be shared is subjected to special effect processing, a video processor needs to perform access operations on the video source data. However, since the data size of the video source data is large, accessing the video source data occupies a large amount of the video processor's memory resources and of the network's transmission resources, which reduces the performance of the video processor in multitask parallel processing.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for processing video data, and a computer-readable storage medium, which can perform special effect processing on a video to be shared without performing access operation on video source data, thereby improving performance of multitask parallel processing of a video processor.
In a first aspect, an embodiment of the present invention provides a method for processing video data, including:
acquiring an address of video source data, and performing OES texture mapping on the video source data according to the address of the video source data to obtain an OES texture used for indexing the address of the video source data;
reading the video source data by using the OES texture, and performing special effect processing on the video source data to obtain video data after special effect processing;
and coding the video data after the special effect processing, and sending the coded video data.
In some embodiments of the first aspect, said performing OES texture mapping on the video source data according to the address of the video source data to obtain OES texture for indexing the address of the video source data comprises:
establishing OES texture;
and writing the address of the video source data into the newly-built OES texture to obtain the OES texture for indexing the address of the video source data.
In some embodiments of the first aspect, said reading said video source data using said OES texture comprises:
reading an address of the video source data from the OES texture;
and reading the video source data from the address of the video source data.
In some embodiments of the first aspect, the performing special effect processing on the video source data to obtain video data after special effect processing includes:
editing and/or rendering the video source data to obtain video data after editing and/or rendering;
and taking the video data after the editing processing and/or the rendering processing as the video data after the special effect processing.
In some embodiments of the first aspect, the video source data comprises source data for two or more videos;
the performing special effect processing on the video source data to obtain video data after special effect processing includes:
reading addresses of source data of respective videos from OES textures for indexing addresses of source data of corresponding videos, respectively;
respectively reading the source data of each video from the address of the source data of each video;
and performing screen mixing processing on the source data of each video to obtain video data subjected to screen mixing processing, and taking the video data subjected to screen mixing processing as the video data subjected to special effect processing.
In a second aspect, an embodiment of the present invention provides a method for processing video data, including:
decoding the received encoded video data to obtain decoded video data;
acquiring the address of the decoded video data, and performing surface texture mapping on the decoded video data according to the address of the decoded video data to obtain surface texture for indexing the address of the decoded video data;
reading the decoded video data by using the surface texture, and performing special effect processing on the decoded video data to obtain video data after special effect processing;
and playing the video data after the special effect processing.
In some embodiments of the second aspect, the performing surface texture mapping on the decoded video data according to the address of the decoded video data to obtain a surface texture for indexing the address of the decoded video data includes:
newly establishing a surface texture;
and writing the address of the decoded video data into a newly-created surface texture to obtain the surface texture for indexing the address of the decoded video data.
In a third aspect, an embodiment of the present invention provides an apparatus for processing video data, which includes a first mapping module, a first processing module, and an encoding module. Wherein,
the first mapping module is used for acquiring an address of video source data, and performing OES texture mapping on the video source data according to the address of the video source data to obtain an OES texture for indexing the address of the video source data;
the first processing module is used for reading the video source data by using the OES texture and carrying out special effect processing on the video source data to obtain video data after special effect processing;
and the coding module is used for coding the video data after the special effect processing and sending the coded video data.
In some embodiments of the third aspect, the first mapping module includes a first newly-built unit and a first writing unit. Wherein,
the first newly-built unit is used for newly building OES texture;
the first writing unit is configured to write an address of the video source data into a newly created OES texture, so as to obtain an OES texture used for indexing the address of the video source data.
In some embodiments of the third aspect, the first processing module comprises a first reading unit, a second reading unit and a first processing unit. Wherein,
the first reading unit is used for reading the address of the video source data from the OES texture;
the second reading unit is used for reading the video source data from the address of the video source data;
the first processing unit is used for carrying out special effect processing on the video source data to obtain video data after the special effect processing.
In some embodiments of the third aspect, the first processing module further comprises an editing unit and/or a rendering unit. Wherein,
the editing unit is used for editing the video source data to obtain video data after editing, and taking the video data after editing as the video data after special effect processing;
and the rendering unit is used for rendering the video source data to obtain rendered video data, and taking the rendered video data as the video data after the special effect processing.
In some embodiments of the third aspect, the first processing module further comprises a mixing unit comprising a first reading sub-unit, a second reading sub-unit, and a mixing sub-unit;
the first reading subunit is configured to read addresses of video source data belonging to the respective videos from OES textures of video source data of two or more videos;
the second reading subunit is configured to read the source data of each video from the address of the source data of each video respectively;
and the mixing subunit is configured to perform screen mixing processing on the source data of each video to obtain video data after the screen mixing processing, and use the video data after the screen mixing processing as the video data after the special effect processing.
In some embodiments of the third aspect, the apparatus for processing video data further comprises a decoding module, a second mapping module, a second processing module, and a playing module. Wherein,
the decoding module is used for decoding the received encoded video data to obtain decoded video data;
the second mapping module is configured to obtain an address of the decoded video data, and perform surface texture mapping on the decoded video data according to the address of the decoded video data to obtain a surface texture used for indexing the address of the decoded video data;
the second processing module is configured to read the decoded video data by using the surface texture, and perform special effect processing on the decoded video data to obtain video data after special effect processing;
and the playing module is used for playing the video data after the special effect processing.
In some embodiments of the third aspect, the second mapping module includes a second newly-built unit and a second writing unit. Wherein,
the second newly-built unit is used for newly building a surface texture;
and the second writing unit is used for writing the address of the decoded video data into the newly-built surface texture to obtain a surface texture for indexing the address of the decoded video data.
In a fourth aspect, an embodiment of the present invention provides an apparatus for processing video data, including a memory, a processor, and a program stored in the memory and executable on the processor, where the processor executes the program to implement the method described in the foregoing embodiments.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a program is stored, the program, when executed by a processor, implementing the method described in the above embodiments.
According to the embodiments of the invention, when special effect processing is performed on the video to be shared, an OES texture or a surface texture for indexing the address of the video data can be obtained by acquiring the address of the video data and performing OES texture mapping or surface texture mapping on the video data according to that address. The video data can then be read by using the OES texture or the surface texture, and special effect processing can be performed on the read video data.
As can be seen from the above description, the information transmitted in the embodiment of the present invention is not video data, but an address of the video data. When the video to be shared is subjected to special effect processing, the video data to be processed can be indexed according to the address of the video data without performing access operation on the video data, so that the memory resource of the video processor and the transmission resource of a network are not occupied, and the performance of multi-task parallel processing of the video processor can be improved.
Drawings
The embodiments of the present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters identify like or similar features.
FIG. 1 is a diagram illustrating video sharing over the Internet according to the prior art;
fig. 2 is a flowchart illustrating a method for processing video data according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for processing video data according to another embodiment of the present invention;
fig. 4 is a flowchart illustrating a method for processing video data according to still another embodiment of the invention;
fig. 5 is a flowchart illustrating a method for processing video data according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a video data processing apparatus according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of a video data processing apparatus according to yet another embodiment of the present invention;
fig. 9 is a schematic structural diagram of a video data processing apparatus according to yet another embodiment of the present invention.
Detailed Description
Features of various aspects of embodiments of the invention and exemplary embodiments will be described in detail below. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the invention. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the embodiments of the present invention by way of illustration of the embodiments of the present invention. The embodiments of the invention are in no way limited to any specific configurations and algorithms set forth below, but rather cover any modifications, alterations, and adaptations of the elements, components, and algorithms without departing from the spirit of the embodiments of the invention. In the drawings and the following description, well-known structures and techniques are not shown in order to avoid unnecessarily obscuring the embodiments of the invention.
The video data processing method and device in the embodiments of the invention are applied to video sharing over the Internet. Special effect processing of the video to be shared can be realized while saving the memory resources of the video processor and the transmission resources of the network. The video after special effect processing can be sent to the network or played at a client.
Fig. 1 is a schematic diagram of video sharing over the internet in the prior art. Referring to fig. 1, in an application scenario, a user a records a video by using a video client, performs special effect processing on the recorded video, and then shares the video after the special effect processing to a network. In another application scenario, the user B uses the video client to play the shared video in the network, and before playing, special effect processing needs to be performed on the shared video.
The following describes the technical solution in the embodiment of the present invention in detail, respectively for two application scenarios.
Fig. 2 is a flowchart illustrating a method for processing video data according to an embodiment of the present invention. The method for processing the video data is used for a video transmitting end. As shown in fig. 2, the method for processing video data includes steps 201 to 203.
In step 201, an address of video source data is obtained, and OES texture mapping is performed on the video source data according to the address of the video source data to obtain OES texture for indexing the address of the video source data.
The video source data may be YUV format data. OES texture mapping is one of the texture mapping methods. An OES texture can map the address information of the video source data to texture data, and can deliver the texture data carrying the address information of the video source data to a specified location, such as a video processor, so that special effect processing can be performed on the video source data.
OES texture mapping may be performed through OpenGL ES, a subset of OpenGL (Open Graphics Library). OpenGL is a professional graphics program interface that defines a cross-language, cross-platform programming interface; it can be used for both three-dimensional and two-dimensional graphics and provides convenient access to the underlying graphics library.
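For illustration only, a minimal sketch of how such an OES texture might be created on Android with OpenGL ES 2.0 is given below. It assumes a current EGL/OpenGL ES context on the calling thread; the class and method names are chosen for this example and do not come from the patent.

    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    public final class OesTextureHelper {
        // Creates a texture object bound to the GL_TEXTURE_EXTERNAL_OES target.
        // Must be called on a thread that holds a current OpenGL ES 2.0 context.
        public static int createOesTexture() {
            int[] textures = new int[1];
            GLES20.glGenTextures(1, textures, 0);
            int textureId = textures[0];
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
            return textureId;
        }
    }

The returned texture id is the handle with which the address of the video source data is subsequently associated, as described in steps 2012 and 2013 below.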
In step 202, the OES texture is used to read the video source data and perform special effect processing on the video source data to obtain video data after the special effect processing.
In step 203, the video data after the special effect processing is encoded, and the encoded video data is transmitted.
The encoded video data may be H.264 format data. H.264 format data enables video data sharing over the network through the H.264 data transmission protocol.
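As a sketch of one way the encoding step could be realized on Android (assuming API level 18 or later and the MediaCodec API; the bit rate, frame rate and I-frame interval below are illustrative values, not taken from the patent), the effect-processed frames rendered by OpenGL ES can be delivered to a hardware H.264 encoder through an input Surface, again without copying the pixel data through application memory:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    public final class AvcEncoderSketch {
        public final MediaCodec encoder;
        public final Surface inputSurface;  // render target of the special-effect pass

        public AvcEncoderSketch(int width, int height) throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
            // COLOR_FormatSurface lets the encoder read frames directly from a Surface
            // that the OpenGL ES pipeline renders into.
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);   // illustrative value
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);      // illustrative value
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // illustrative value
            encoder = MediaCodec.createEncoderByType("video/avc");
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            inputSurface = encoder.createInputSurface();  // must be called between configure() and start()
            encoder.start();
        }
    }

Frames drawn into inputSurface are consumed by the encoder, and the resulting H.264 bitstream can then be sent as described in step 203.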
According to the embodiment of the invention, when special effect processing is performed on the video to be shared, the address of the video source data is acquired and mapped, so that an OES texture used for indexing the address of the video source data is obtained. The video source data can then be read by using the OES texture, and special effect processing can be performed on the read video source data.
As can be seen from the above description, in the embodiment of the present invention, the video data is not directly transmitted, but the address of the video data. When the video to be shared is subjected to special effect processing, the video data to be processed can be indexed according to the address of the video data without performing access operation on the video data, so that the memory resource of the video processor and the transmission resource of a network are not occupied, and the performance of multi-task parallel processing of the video processor can be improved.
In addition, when the video to be shared is subjected to special effect processing, the video data to be processed can be indexed according to the address of the video data without performing access operation on the video data, so that the speed of the special effect processing on the video data is increased.
In addition, since the video data processing method in the embodiment of the present invention is implemented through the OpenGL standard, and the Android system supports the OpenGL standard, the video data processing method in the embodiment of the present invention can be adapted to Android devices of any model.
It should be noted that the video data processing method in the embodiment of the present invention is not limited to the video to be shared, and can be applied to any video that needs to be processed.
Fig. 3 is a flowchart illustrating a method for processing video data according to another embodiment of the present invention. The method for processing the video data is used for a video transmitting end. The difference between fig. 3 and fig. 2 is that step 201 in fig. 2 can be subdivided into steps 2011 to 2013 in fig. 3, and step 202 in fig. 2 can be subdivided into steps 2021 to 2023 in fig. 3.
In step 2011, the address of the video source data is obtained.
In step 2012, an OES texture is newly created.
As one approach, the address of the source data is mapped into a two-dimensional array, with each entry in the array called a texture point (texel). Alternatively, the address of the source data may be mapped onto a non-two-dimensional object, such as a sphere or other 3D object model.
In step 2013, the address of the video source data is written into the newly created OES texture, resulting in OES texture for indexing the address of the video source data.
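One plausible way to realize steps 2012 and 2013 on Android is sketched below; the patent does not spell out this mechanism, the camera is used purely as an example video source, and the class and method names are assumptions of this sketch. The OES texture is wrapped in a SurfaceTexture and handed to the producer of the video source data, so that the texture indexes the frames rather than holding a copy of them.

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import java.io.IOException;

    public final class SourceBindingSketch {
        // Associates the video source with the OES texture, so that frames become
        // addressable through the texture instead of being copied into application memory.
        public static SurfaceTexture bindCameraToTexture(Camera camera, int oesTextureId)
                throws IOException {
            SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
            camera.setPreviewTexture(surfaceTexture);
            camera.startPreview();
            return surfaceTexture;
        }
    }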
In step 2021, the address of the video source data is read from the OES texture.
The OES texture carries, as texture data, the address information of the video source data.
In step 2022, the video source data is read from the address of the video source data.
In step 2023, the video source data is edited and/or rendered to obtain edited and/or rendered video data. The video data after the editing processing and/or the rendering processing is taken as the video data after the special effect processing.
The editing processing and/or the rendering processing are two processing modes of special effect processing. For the specific manner of the editing processing and/or the rendering processing, those skilled in the art can refer to the related prior art, and details are not described here.
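As an illustration of how a rendering pass might read frames through the OES texture and apply a simple effect (grayscale conversion is chosen here only as an example effect; the attribute and uniform names are assumptions of this sketch), a fragment shader could look like the following:

    public final class EffectShaderSketch {
        // Samples the external (OES) texture; requires the GL_OES_EGL_image_external extension.
        public static final String FRAGMENT_SHADER =
                "#extension GL_OES_EGL_image_external : require\n"
                + "precision mediump float;\n"
                + "varying vec2 vTexCoord;\n"
                + "uniform samplerExternalOES uVideoTexture;\n"
                + "void main() {\n"
                + "    vec4 color = texture2D(uVideoTexture, vTexCoord);\n"
                + "    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n"
                + "    gl_FragColor = vec4(vec3(gray), color.a);\n"
                + "}\n";
    }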
Another processing mode of special effect processing is screen mixing processing, which usually requires mixing the source data of two or more videos.
In order to perform special effect processing on video source data, the address of the source data of each video can be read from two or more OES textures used for indexing the address of the source data of the corresponding video; respectively reading the source data of each video from the address of the source data of each video; and performing screen mixing processing on the source data of each video to obtain video data after the screen mixing processing, and taking the video data after the screen mixing processing as video data after special effect processing.
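A sketch of one possible screen mixing pass follows; the side-by-side composition of two sources is chosen purely as an example, and the uniform names are assumptions of this sketch. Each source is read through its own OES texture, so the mixing pass also works on indexed frames rather than copied ones.

    public final class MixScreenShaderSketch {
        // Places the first video in the left half of the output and the second in the right half.
        public static final String MIX_FRAGMENT_SHADER =
                "#extension GL_OES_EGL_image_external : require\n"
                + "precision mediump float;\n"
                + "varying vec2 vTexCoord;\n"
                + "uniform samplerExternalOES uVideoA;\n"
                + "uniform samplerExternalOES uVideoB;\n"
                + "void main() {\n"
                + "    if (vTexCoord.x < 0.5) {\n"
                + "        gl_FragColor = texture2D(uVideoA, vec2(vTexCoord.x * 2.0, vTexCoord.y));\n"
                + "    } else {\n"
                + "        gl_FragColor = texture2D(uVideoB, vec2((vTexCoord.x - 0.5) * 2.0, vTexCoord.y));\n"
                + "    }\n"
                + "}\n";
    }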
Fig. 4 is a flowchart illustrating a method for processing video data according to another embodiment of the present invention. The method for processing the video data is used for a video receiving end. As shown in fig. 4, the method of processing video data includes steps 401 to 404.
In step 401, the received encoded video data is decoded to obtain decoded video data.
In step 402, an address of the decoded video data is obtained, and surface texture mapping is performed on the decoded video data according to the address of the decoded video data, so as to obtain surface texture for indexing the address of the decoded video data.
Surface texture mapping is also a type of texture mapping. A surface texture can map the address information of the decoded video data into texture data, and can transfer the texture data carrying the address information of the decoded video data to a specified location, such as a video processor, so that special effect processing can be performed on the decoded video data. Surface texture mapping can also be performed through OpenGL.
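Under the same Android assumptions as above, a sketch of how the decoder in step 401 could route decoded frames straight into the surface texture is shown below; the MediaFormat describing the incoming stream is assumed to come from a MediaExtractor or similar component and is not shown, and the class and method names are assumptions of this sketch.

    import android.graphics.SurfaceTexture;
    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    public final class DecoderBindingSketch {
        // Configures an H.264 decoder to write frames directly into the SurfaceTexture
        // that wraps the OES texture used for effect rendering and playback.
        public static MediaCodec createDecoder(MediaFormat streamFormat, int oesTextureId)
                throws IOException {
            SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
            Surface outputSurface = new Surface(surfaceTexture);
            MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
            decoder.configure(streamFormat, outputSurface, null, 0);
            decoder.start();
            return decoder;
        }
    }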
In step 403, the decoded video data is read by using the surface texture, and the decoded video data is subjected to special effect processing to obtain video data subjected to special effect processing.
In step 404, the video data after the special effect processing is played.
According to the embodiment of the invention, when special effect processing is performed on the video to be shared, the address of the decoded video data is acquired and mapped, so that a surface texture used for indexing the address of the decoded video data is obtained. The decoded video data can then be read by using the surface texture, and special effect processing can be performed on the read decoded video data.
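A minimal sketch of the per-frame step on the GL thread is given below; the class and method names are assumptions of this example, and in practice the call would be driven by the rendering loop after the decoder signals that a new frame is available.

    import android.graphics.SurfaceTexture;

    public final class FrameUpdateSketch {
        private final float[] transformMatrix = new float[16];

        // updateTexImage() makes the latest decoded frame visible through the OES texture
        // without copying pixel data into application memory; the transform matrix is then
        // passed to the effect shader so texture coordinates address the frame correctly.
        public void onDrawFrame(SurfaceTexture surfaceTexture) {
            surfaceTexture.updateTexImage();
            surfaceTexture.getTransformMatrix(transformMatrix);
        }
    }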
Therefore, when the special effect processing is performed on the video to be shared, the transmitted information is not the video data, but the address of the video data. When the video to be shared is subjected to special effect processing, the video data to be processed can be indexed according to the address of the video data without performing access operation on the video data, so that the memory resource of the video processor and the transmission resource of a network are not occupied, and the performance of multi-task parallel processing of the video processor can be improved.
Fig. 5 is a flowchart illustrating a method for processing video data according to another embodiment of the present invention. The method for processing the video data is used for a video receiving end. FIG. 5 differs from FIG. 4 in that step 402 in FIG. 4 can be further subdivided into steps 4021 to 4023 in FIG. 5.
In step 4021, the address of the decoded video data is acquired.
In step 4022, a surface texture is created.
In step 4023, the address of the decoded video data is written in the new surface texture to obtain the surface texture for indexing the address of the decoded video data.
Similarly, one approach is to map the address of the decoded video data into a two-dimensional array, with each entry in the array called a texture point (texel). Alternatively, the address may be mapped onto a non-two-dimensional object, such as a sphere or another 3D object model.
It should be noted that the special effect processing on the decoded video data at the video receiving end also includes editing processing and/or rendering processing. Specifically, the decoded video data is subjected to editing processing and/or rendering processing to obtain video data after the editing processing and/or the rendering processing, and the video data after the editing processing and/or the rendering processing is taken as the video data after the special effect processing.
The special effect processing on the decoded video data at the video receiving end also includes screen mixing processing; usually, the decoded video data of two or more videos need to be mixed.
In order to perform special effect processing on the decoded video data, the address of the decoded video data of each video can be read from two or more surface textures used for indexing the addresses of the corresponding decoded video data; the decoded video data of each video is read from the address of the decoded video data of that video; and screen mixing processing is performed on the decoded video data of the videos to obtain video data after the screen mixing processing, and the video data after the screen mixing processing is taken as the video data after the special effect processing.
Fig. 6 is a device for processing video data according to an embodiment of the present invention. As shown in fig. 6, the video data processing apparatus includes: a first mapping module 501, a first processing module 502 and an encoding module 503.
The first mapping module 501 is configured to obtain an address of video source data, and perform OES texture mapping on the video source data according to the address of the video source data to obtain an OES texture for indexing the address of the video source data.
The first processing module 502 is configured to read the video source data by using the OES texture, and perform special effect processing on the video source data to obtain video data after the special effect processing.
The encoding module 503 is configured to encode the video data after the special effect processing, and send the encoded video data.
When special effect processing is performed on a video to be shared, the first mapping module 501 obtains the address of the video source data and maps it to an OES texture used for indexing that address. The first processing module 502 can then use the OES texture to read the video source data and perform special effect processing on the read video source data.
As can be seen from the above description, the information transmitted in the embodiment of the present invention is not video data, but an address of the video data. When the video to be shared is subjected to special effect processing, the video data to be processed can be indexed according to the address of the video data without performing access operation on the video data, so that the memory resource of the video processor and the transmission resource of a network are not occupied, and the performance of multi-task parallel processing of the video processor can be improved.
Fig. 7 is a device for processing video data according to another embodiment of the present invention. FIG. 7 differs from FIG. 6 in that the first mapping module 501 in FIG. 6 can be subdivided into the first newly-built unit 5011 and the first writing unit 5012 in FIG. 7; the first processing module 502 in fig. 6 may be subdivided into the first reading unit 5021, the second reading unit 5022, and the first processing unit 5023 in fig. 7.
Among them, the first newly-built unit 5011 is used to newly build an OES texture.
The first writing unit 5012 is used to write the address of the video source data into the newly created OES texture, resulting in an OES texture for indexing the address of the video source data.
The first reading unit 5021 is used for reading the address of the video source data from the OES texture.
The second reading unit 5022 is used for reading the video source data from the address of the video source data.
The first processing unit 5023 is configured to perform special effect processing on the video source data to obtain video data after the special effect processing.
Fig. 8 is a device for processing video data according to still another embodiment of the present invention. Fig. 8 is different from fig. 6 in that the first processing module 502 in fig. 6 may be refined into the editing unit 5024, the rendering unit 5025, and the mixing unit 5026 in fig. 8.
The editing unit 5024 is configured to perform editing processing on the video source data to obtain video data after the editing processing, and use the video data after the editing processing as video data after special effect processing.
The rendering unit 5025 is configured to perform rendering processing on the video source data to obtain video data after the rendering processing, and use the video data after the rendering processing as video data after special effect processing.
The mixing unit 5026 further includes a first reading sub-unit, a second reading sub-unit and a mixing sub-unit.
The first reading subunit is used for respectively reading the addresses of the video source data belonging to each video from the OES textures of the video source data of two or more videos.
And the second reading subunit is used for reading the source data of each video from the address of the source data of each video respectively.
The mixing subunit is used for performing screen mixing processing on the source data of each video to obtain video data after the screen mixing processing, and taking the video data after the screen mixing processing as the video data after the special effect processing.
Fig. 9 is a device for processing video data according to another embodiment of the present invention. Fig. 9 is different from fig. 6 in that fig. 9 further includes a video data processing device at the video receiving end. The video data processing device at the video receiving end comprises a decoding module 504, a second mapping module 505, a second processing module 506, and a playing module 507.
The decoding module 504 is configured to decode the received encoded video data to obtain decoded video data.
The second mapping module 505 is configured to obtain an address of the decoded video data, and perform surface texture mapping on the decoded video data according to the address of the decoded video data to obtain a surface texture for indexing the address of the decoded video data.
The second processing module 506 is configured to read the decoded video data by using the surface texture, and perform special effect processing on the decoded video data to obtain video data after the special effect processing.
The playing module 507 is configured to play the video data after the special effect processing.
When special effect processing is performed on the video to be shared, the second mapping module 505 obtains the address of the decoded video data and maps it to a surface texture used for indexing that address. The second processing module 506 can then use the surface texture to read the decoded video data and perform special effect processing on the read decoded video data.
As can be seen from the above description, the information transmitted in the embodiment of the present invention is not video data, but an address of the video data. When the video to be shared is subjected to special effect processing, the video data to be processed can be indexed according to the address of the video data without performing access operation on the video data, so that the memory resource of the video processor and the transmission resource of a network are not occupied, and the performance of multi-task parallel processing of the video processor can be improved.
Referring to fig. 9, the second mapping module 505 can be subdivided into a second newly-built unit 5051 and a second writing unit 5052.
Therein, the second newly-built unit 5051 is used for newly creating a surface texture.
The second writing unit 5052 is configured to write the address of the decoded video data into the newly created surface texture, to obtain a surface texture for indexing the address of the decoded video data.
It should be noted that the video data processing apparatus provided in the embodiments of the present invention includes a memory, a processor, and a program stored in the memory and executable on the processor, and the processor is configured to implement the method described above when executing the program.
It should be further noted that the embodiment of the present invention also provides a computer readable storage medium, on which a program is stored, and the program is used for implementing the method described above when being executed by a processor.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts in the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. For the device embodiments, reference may be made to the description of the method embodiments in the relevant part. Embodiments of the invention are not limited to the specific steps and structures described above and shown in the drawings. Those skilled in the art may make various changes, modifications and additions to, or change the order between the steps, after appreciating the spirit of the embodiments of the invention. Also, a detailed description of known process techniques is omitted herein for the sake of brevity.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of an embodiment of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.

Claims (16)

1. A method for processing video data, comprising:
acquiring an address of video source data, and performing OES texture mapping on the video source data according to the address of the video source data to obtain an OES texture for indexing the address of the video source data;
reading the video source data by using the OES texture, and performing special effect processing on the video source data to obtain video data after special effect processing;
and coding the video data after the special effect processing, and sending the coded video data.
2. The processing method according to claim 1, wherein said performing OES texture mapping on the video source data according to the address of the video source data to obtain an OES texture for indexing the address of the video source data comprises:
establishing OES texture;
and writing the address of the video source data into the newly-built OES texture to obtain the OES texture for indexing the address of the video source data.
3. The processing method according to claim 1, wherein said reading the video source data using the OES texture comprises:
reading an address of the video source data from the OES texture;
and reading the video source data from the address of the video source data.
4. The processing method according to claim 1, wherein performing special effect processing on the video source data to obtain video data after special effect processing comprises:
editing and/or rendering the video source data to obtain video data after editing and/or rendering;
and taking the video data after the editing processing and/or the rendering processing as the video data after the special effect processing.
5. The processing method according to claim 1, wherein the video source data comprises source data of two or more videos;
the performing special effect processing on the video source data to obtain video data after special effect processing includes:
reading addresses of source data of respective videos from OES textures for indexing addresses of source data of corresponding videos, respectively;
respectively reading the source data of each video from the address of the source data of each video;
and performing screen mixing processing on the source data of each video to obtain video data subjected to screen mixing processing, and taking the video data subjected to screen mixing processing as the video data subjected to special effect processing.
6. A method for processing video data, comprising:
decoding the received encoded video data to obtain decoded video data;
acquiring the address of the decoded video data, and performing surface texture mapping on the decoded video data according to the address of the decoded video data to obtain surface texture for indexing the address of the decoded video data;
reading the decoded video data by using the surface texture, and performing special effect processing on the decoded video data to obtain video data after special effect processing;
and playing the video data after the special effect processing.
7. The processing method according to claim 6, wherein said performing surface texture mapping on the decoded video data according to the address of the decoded video data to obtain a surface texture for indexing the address of the decoded video data comprises:
newly establishing a surface texture;
and writing the address of the decoded video data into a newly-created surface texture to obtain the surface texture for indexing the address of the decoded video data.
8. A video data processing apparatus comprising a first mapping module, a first processing module and an encoding module, wherein,
the first mapping module is used for acquiring an address of video source data, and performing OES texture mapping on the video source data according to the address of the video source data to obtain OES texture for indexing the address of the video source data;
the first processing module is used for reading the video source data by using the OES texture and carrying out special effect processing on the video source data to obtain video data after special effect processing;
and the coding module is used for coding the video data after the special effect processing and sending the coded video data.
9. The apparatus for processing video data according to claim 8, wherein the first mapping module comprises a first newly-built unit and a first writing unit; wherein,
the first newly-built unit is used for newly building OES texture;
the first writing unit is configured to write an address of the video source data into a newly created OES texture, so as to obtain an OES texture used for indexing the address of the video source data.
10. The apparatus for processing video data according to claim 8, wherein the first processing module comprises a first reading unit, a second reading unit and a first processing unit; wherein,
the first reading unit is used for reading the address of the video source data from the OES texture;
the second reading unit is used for reading the video source data from the address of the video source data;
the first processing unit is used for carrying out special effect processing on the video source data to obtain video data after the special effect processing.
11. The apparatus for processing video data according to claim 8, wherein the first processing module further comprises an editing unit and/or a rendering unit; wherein,
the editing unit is used for editing the video source data to obtain video data after editing, and taking the video data after editing as the video data after special effect processing;
and the rendering unit is used for rendering the video source data to obtain rendered video data, and taking the rendered video data as the video data after the special effect processing.
12. The apparatus for processing video data according to claim 8, wherein the first processing module further comprises a mixing unit, the mixing unit comprising a first reading sub-unit, a second reading sub-unit and a mixing sub-unit;
the first reading subunit is configured to read addresses of the video source data belonging to each of the videos from the OES textures of the video source data of two or more videos, respectively;
the second reading subunit is configured to read the source data of each video from the address of the source data of each video respectively;
and the mixing subunit is configured to perform screen mixing processing on the source data of each video to obtain video data after the screen mixing processing, and use the video data after the screen mixing processing as the video data after the special effect processing.
13. The apparatus for processing video data according to claim 8, wherein the apparatus for processing video data further comprises a decoding module, a second mapping module, a second processing module, and a playing module; wherein,
the decoding module is used for decoding the received encoded video data to obtain decoded video data;
the second mapping module is configured to obtain an address of the decoded video data, and perform surface texture mapping on the decoded video data according to the address of the decoded video data to obtain a surface texture used for indexing the address of the decoded video data;
the second processing module is configured to read the decoded video data by using the surface texture, and perform special effect processing on the decoded video data to obtain video data after special effect processing;
and the playing module is used for playing the video data after the special effect processing.
14. The apparatus for processing video data according to claim 13, wherein said second mapping module comprises a second newly-built unit and a second writing unit; wherein,
the second newly-built unit is used for newly building a surface texture;
and the second writing unit is used for writing the address of the decoded video data into a newly-created surface texture to obtain a surface texture for indexing the address of the decoded video data.
15. A device for processing video data, comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor executes the program to implement the method according to any one of claims 1 to 5, or the method according to claim 6 or 7.
16. A computer-readable storage medium, on which a program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 5, or the method according to claim 6 or 7.
CN201710465506.3A 2017-06-19 2017-06-19 Video data processing method and device and computer readable storage medium Pending CN109151567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710465506.3A CN109151567A (en) 2017-06-19 2017-06-19 Video data processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710465506.3A CN109151567A (en) 2017-06-19 2017-06-19 Video data processing method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN109151567A true CN109151567A (en) 2019-01-04

Family

ID=64804347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710465506.3A Pending CN109151567A (en) 2017-06-19 2017-06-19 Video data processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109151567A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8817021B1 (en) * 2011-11-11 2014-08-26 Google Inc. System for writing, interpreting, and translating three-dimensional (3D) scenes
CN105096373A (en) * 2015-06-30 2015-11-25 华为技术有限公司 Media content rendering method, user device and rendering system
CN106127673A (en) * 2016-07-19 2016-11-16 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, device and computer equipment
CN106230841A (en) * 2016-08-04 2016-12-14 深圳响巢看看信息技术有限公司 A kind of video U.S. face and the method for plug-flow in real time in network direct broadcasting based on terminal
CN106296819A (en) * 2016-08-12 2017-01-04 北京航空航天大学 A kind of panoramic video player based on Intelligent set top box

Similar Documents

Publication Publication Date Title
CN112233217B (en) Rendering method and device of virtual scene
JP4836010B2 (en) Uniform video decoding and display
CN113457160B (en) Data processing method, device, electronic equipment and computer readable storage medium
US20160037185A1 (en) Methods and apparatuses for coding and decoding depth map
CN104780378A (en) Method, device and player for decoding video
CN114501062A (en) Video rendering coordination method, device, equipment and storage medium
CN111954051A (en) Method, equipment and system for transmitting video and audio
CN112801855B (en) Method and device for scheduling rendering task based on graphics primitive and storage medium
CN111346378B (en) Game picture transmission method, device, storage medium and equipment
WO2022116764A1 (en) Data processing method and apparatus, and communication node and storage medium
US10043234B2 (en) System and method for frame buffer decompression and/or compression
CN105141567A (en) Interactive data processing method and system of terminal application and service end and interaction method
CN112218148A (en) Screen recording method and device, computer equipment and computer readable storage medium
CN111355978B (en) Video file processing method and device, mobile terminal and storage medium
CN111447504A (en) Three-dimensional video processing method and device, readable storage medium and electronic equipment
CN114579255A (en) Image processing method, device and system of virtual machine and electronic equipment
CN112764877A (en) Method and system for communication between hardware acceleration equipment and process in docker
US20190261062A1 (en) Low latency broadcasting of game engine frames
CN113391811A (en) Function compiling method and device, electronic equipment and computer readable storage medium
CN111355997B (en) Video file generation method and device, mobile terminal and storage medium
CN111355960B (en) Method and device for synthesizing video file, mobile terminal and storage medium
CN110740138B (en) Data transmission method and device
CN109151567A (en) Video data processing method and device and computer readable storage medium
CN117065357A (en) Media data processing method, device, computer equipment and storage medium
US10798142B2 (en) Method, apparatus and system of video and audio sharing among communication devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190104