CN108965930B - Video data processing method and device - Google Patents


Info

Publication number
CN108965930B
Authority
CN
China
Prior art keywords
video
data
preset function
subtitle
unrecognized character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711487183.4A
Other languages
Chinese (zh)
Other versions
CN108965930A (en)
Inventor
赵杰
朱道彦
韩杰
王艳辉
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201711487183.4A priority Critical patent/CN108965930B/en
Publication of CN108965930A publication Critical patent/CN108965930A/en
Application granted granted Critical
Publication of CN108965930B publication Critical patent/CN108965930B/en



Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4884 Data services, e.g. news ticker, for displaying subtitles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests, involving handling client requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Abstract

An embodiment of the invention provides a video data processing method and device, applied to a video networking terminal. The method includes: when an on-demand operation by a user for a target video resource is detected, requesting the target video resource from an Internet server; receiving the target video resource returned by the Internet server and decoding it to obtain video data and first subtitle data; when unrecognized character codes are detected in the first subtitle data, performing code conversion on the unrecognized character codes to obtain second subtitle data; and displaying the subtitles corresponding to the second subtitle data while the video data is played. Embodiments of the invention allow a video networking terminal to request video resources from the Internet, avoid the situation in which the terminal cannot recognize the subtitle data in a video resource, and improve the terminal's compatibility.

Description

Video data processing method and device
Technical Field
The present invention relates to the field of video networking technologies, and in particular, to a method and an apparatus for processing video data.
Background
With advances in technology, the quality of video resources has improved greatly. If embedded subtitles are used in a high-quality video resource, problems such as oversized files and player incompatibility can be avoided, but re-recording the video to embed the subtitles inevitably degrades the original code rate, so the resolution of the re-recorded video falls far below that of the original and the quality of the video resource suffers.
At present, to keep the quality of a video resource intact, external (plug-in) subtitles are usually adopted instead. External subtitles damage video quality far less than embedded ones: the subtitle file runs independently alongside the video, so the loss of resolution is minimal or even zero. However, a video networking terminal cannot be compatible with all external subtitle formats, and when a format is incompatible, missing characters and garbled text appear, affecting playback of the video resource.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide a method and apparatus for video data processing that overcome or at least partially solve the above-mentioned problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for processing video data, which is applied to a video network terminal, and includes:
the video networking terminal requests a target video resource from an Internet server when detecting the on-demand operation of a user for the target video resource;
the video networking terminal receives a target video resource returned by the Internet server and decodes the target video resource to obtain video data and first subtitle data;
when the video network terminal detects that unrecognized character codes exist in the first caption data, performing code conversion on the unrecognized character codes to obtain second caption data;
and when the video network terminal plays the video data, displaying the caption corresponding to the second caption data.
Preferably, the step in which, when detecting that unrecognized character codes exist in the first subtitle data, the video networking terminal performs code conversion on the unrecognized character codes to obtain second subtitle data includes:
when the first caption data is detected to have unrecognized character codes, calling a first preset function to generate code conversion parameters aiming at the unrecognized character codes;
and inputting the code conversion parameter and the unrecognized character code into a second preset function, and performing code conversion on the unrecognized character code by the second preset function according to the code conversion parameter to obtain second caption data.
Preferably, the step of calling a first preset function to generate a transcoding parameter for the unrecognized character code when detecting that the unrecognized character code exists in the first subtitle data includes:
determining a first coding mode of the character codes which cannot be identified;
acquiring a second coding mode which can be identified by the video network terminal;
inputting the first coding mode and the second coding mode into the first preset function;
and acquiring the code conversion parameter output by the first preset function.
Preferably, before the step of inputting the code conversion parameter and the unrecognized character code into a second preset function, and performing code conversion on the unrecognized character code by the second preset function according to the code conversion parameter to obtain second caption data, the method further includes:
calling the first preset function to establish an internal buffer area; the internal buffer area is used for performing code conversion on the unrecognized character codes;
after the step of inputting the code conversion parameter and the unrecognized character code into the second preset function, and performing code conversion on the unrecognized character code by the second preset function according to the code conversion parameter to obtain the second caption data, the method further includes:
and calling a third preset function to release the internal buffer area.
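The three "preset functions" above (create a conversion context and buffer, convert, release) mirror the shape of the C iconv API (iconv_open / iconv / iconv_close). The patent does not name concrete functions, so the following Python sketch is purely illustrative: it detects undecodable subtitle bytes and converts them, assuming GBK as the unrecognized source encoding and UTF-8 as the terminal's encoding.

```python
# Hypothetical sketch of the transcoding flow; function names and the
# GBK -> UTF-8 pairing are assumptions, not taken from the patent.

def has_unrecognized_chars(subtitle_bytes, terminal_encoding="utf-8"):
    """Precondition check: the terminal cannot decode the raw subtitle bytes."""
    try:
        subtitle_bytes.decode(terminal_encoding)
        return False
    except UnicodeDecodeError:
        return True

def transcode_subtitles(subtitle_bytes, source_encoding, terminal_encoding="utf-8"):
    """Convert the unrecognized encoding into one the terminal can render.
    In C this would be iconv_open (set up), iconv (convert), iconv_close (release)."""
    text = subtitle_bytes.decode(source_encoding)   # interpret the source bytes
    return text.encode(terminal_encoding)           # re-encode for the terminal

# A GBK-encoded subtitle line that a UTF-8-only terminal cannot decode directly.
raw = "第一字幕".encode("gbk")
if has_unrecognized_chars(raw):
    fixed = transcode_subtitles(raw, source_encoding="gbk")
    print(fixed.decode("utf-8"))
```

In practice the source encoding would have to be detected or configured; here it is simply assumed to be known.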
Preferably, when the video network terminal plays the video data, the step of displaying the subtitle corresponding to the second subtitle data includes:
acquiring a timestamp of currently played video data;
searching second subtitle data corresponding to the timestamp;
matching the second caption data with the corresponding caption from a preset local character set;
and displaying the subtitles.
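The timestamp-lookup steps above can be sketched as a sorted-cue search. The cue structure (a list of (start-time-in-ms, text) pairs) is an assumption for illustration; the patent does not specify the subtitle data layout.

```python
# Illustrative sketch: find the subtitle cue matching the current playback timestamp.
import bisect

def subtitle_for_timestamp(cues, ts_ms):
    """Return the text of the latest cue whose start time is <= ts_ms.
    `cues` must be sorted by start time, e.g. [(0, "Hello"), (2000, "World")]."""
    starts = [start for start, _ in cues]
    i = bisect.bisect_right(starts, ts_ms) - 1   # search by timestamp
    return cues[i][1] if i >= 0 else None

cues = [(0, "Hello"), (2000, "World"), (5000, "Goodbye")]
print(subtitle_for_timestamp(cues, 2500))  # the cue active at 2.5 s
```

Binary search keeps the lookup cheap even for long subtitle tracks; the matched text would then be rendered from the terminal's local character set.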
The embodiment of the invention also discloses a device for processing video data, which is applied to the video networking terminal and comprises the following components:
the target video resource request module is used for requesting the target video resource from an Internet server when the on-demand operation of a user for the target video resource is detected;
the target video resource decoding module is used for receiving the target video resource returned by the Internet server and decoding the target video resource to obtain video data and first subtitle data;
a second caption data obtaining module, configured to perform code conversion on an unrecognized character code when it is detected that the unrecognized character code exists in the first caption data, to obtain second caption data;
and the subtitle display module is used for displaying the subtitle corresponding to the second subtitle data when the video data is played.
Preferably, the second subtitle data obtaining module includes:
the first preset function calling sub-module is used for calling a first preset function to generate a code conversion parameter aiming at the unrecognized character code when the unrecognized character code is detected to exist in the first caption data;
and the second preset function calling sub-module is used for inputting the code conversion parameter and the unrecognized character code into a second preset function, and the second preset function carries out code conversion on the unrecognized character code according to the code conversion parameter to obtain second caption data.
Preferably, the first preset function call submodule includes:
a first encoding mode determining unit configured to determine a first encoding mode of the unrecognizable character encoding;
the second coding mode acquisition unit is used for acquiring a second coding mode which can be identified by the video network terminal;
a coding mode input unit, configured to input the first coding mode and the second coding mode into the first preset function;
and the code conversion parameter output unit is used for acquiring the code conversion parameter output by the first preset function.
Preferably, the apparatus further includes:
the internal buffer area establishing module is used for calling the first preset function to establish an internal buffer area; the internal buffer area is used for performing code conversion on the unrecognized character codes;
and the internal buffer area releasing module is used for calling a third preset function to release the internal buffer area.
Preferably, the subtitle display module includes:
the time stamp obtaining submodule is used for obtaining the time stamp of the currently played video data;
the second caption data searching submodule is used for searching the second caption data corresponding to the timestamp;
the subtitle matching sub-module is used for matching the second subtitle data with the corresponding subtitle from a preset local character set;
and the matched subtitle display sub-module is used for displaying the subtitles.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, when the video networking terminal detects a user's on-demand operation for a target video resource, it requests the target video resource from the Internet server, receives the returned target video resource, and decodes it to obtain video data and first subtitle data. When unrecognized character codes are detected in the first subtitle data, code conversion is performed on them to obtain second subtitle data, and the subtitles corresponding to the second subtitle data are displayed while the video data is played. The video networking terminal can thus play Internet video resources on demand, the situation in which it cannot recognize the subtitle data in a video resource is avoided, and its compatibility is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
fig. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a flow chart of steps of a method of video data processing according to an embodiment of the present invention;
FIG. 6 is a flow chart of steps in another method of video data processing according to an embodiment of the present invention;
fig. 7 is a block diagram of an apparatus for processing video data according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Video networking is an important milestone in network development. It is a real-time network that can transmit high-definition video in real time, pushing many Internet applications toward high definition, including face-to-face high-definition communication.
Video networking adopts real-time high-definition video switching technology and can integrate dozens of required services (video, voice, pictures, text, communication, data and so on) on one network platform, such as high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control and information distribution, delivering high-definition-quality video through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network technology (network technology)
The network technology of video networking improves on traditional Ethernet to handle the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, video networking uses Packet Switching to meet streaming requirements. Video networking technology has the flexibility, simplicity and low cost of packet switching together with the quality and security guarantees of circuit switching, achieving seamless whole-network switched virtual-circuit connection and unified data formats.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, while eliminating Ethernet's defects under the premise of full compatibility. It provides end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets; user data requires no format conversion anywhere on the network. Video networking is a higher-level form of Ethernet and a real-time exchange platform that can achieve the whole-network, large-scale, real-time high-definition video transmission that the current Internet cannot, pushing many network video applications toward high definition and unification.
Server Technology (Server Technology)
Server technology on the video networking unified video platform differs from that of traditional servers: its streaming media transmission is built on a connection-oriented basis, its data processing capacity is independent of flow and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking unified video platform is much simpler than general data processing, and efficiency is improved more than a hundredfold over a traditional server.
Storage Technology (Storage Technology)
To handle super-large-capacity, super-large-flow media content, the ultra-high-speed storage technology of the unified video platform uses an advanced real-time operating system. The program information in a server instruction is mapped to specific hard disk space, and the media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of a comparable IP Internet system, yet concurrent flow three times that of a traditional hard disk array is produced, and overall efficiency improves more than tenfold.
Network Security Technology (Network Security Technology)
The structural design of the video network eliminates, at the structural level, the network security problems that trouble the Internet, through per-session independent service permission control and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids hacker and virus attacks, and provides users a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private-network user, or a network aggregate, only one automatic connection is needed. A user terminal, set-top box, or PC connects directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform replaces traditional complex application programming with a menu-style configuration table, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network and the metropolitan area network.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 Devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices of the access network part can be mainly classified into 3 types: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
wherein a packet (uplink data) arriving from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the Destination Address (DA), Source Address (SA), packet type, and packet length meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303 directly, as does a data packet coming from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its steering information. If a packet entering the switching engine module 303 is going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue is nearly full, it is discarded. If a packet entering the switching engine module 303 is not going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its steering information; if that queue is nearly full, it is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues going from downlink network interfaces to uplink network interfaces, to control the rate of upstream forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
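The two polling rules above reduce to a small predicate: every queue needs a non-full send buffer and a positive packet counter, and upstream queues (downlink interface to uplink interface) additionally need a token from the rate control module. The dictionary fields in this sketch are illustrative assumptions, not the patent's data structures.

```python
# Minimal sketch of the access switch's forwarding conditions.

def may_forward(queue, upstream):
    """Return True if the switching engine may forward from this queue.
    `upstream` marks a downlink-to-uplink queue, which also needs a token."""
    if queue["port_send_buffer_full"]:     # 1) port send buffer must not be full
        return False
    if queue["packet_counter"] <= 0:       # 2) queued packet counter must be > 0
        return False
    if upstream and queue["tokens"] <= 0:  # 3) upstream-only: token from rate control
        return False
    return True

q = {"port_send_buffer_full": False, "packet_counter": 3, "tokens": 0}
print(may_forward(q, upstream=False), may_forward(q, upstream=True))
```

The token requirement is what lets the rate control module throttle upstream traffic without affecting downstream forwarding.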
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein a data packet arriving from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type, and packet length meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), and the MAC deletion module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes) before the packet enters the corresponding receive buffer; otherwise the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 2 types: node switches and metropolitan area servers. The node switch mainly includes a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly includes a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly includes the following parts: Destination Address (DA), Source Address (SA), reserved bytes, payload (PDU), and CRC, laid out as follows:

DA | SA | Reserved | Payload | CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
The Source Address (SA) is also composed of 8 bytes and is defined in the same way as the Destination Address (DA);
the reserved byte consists of 2 bytes;
The payload has a different length according to the type of datagram: it is 64 bytes if the datagram is one of the various protocol packets, and 32 + 1024 = 1056 bytes if the datagram is a unicast data packet; of course, the length is not limited to these 2 cases;
The CRC consists of 4 bytes and is calculated in accordance with the standard Ethernet CRC algorithm.
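To make the field layout above concrete, the following C sketch models the access-network packet header (8-byte DA, 8-byte SA, 2 reserved bytes) and computes the trailing 4-byte CRC with the standard Ethernet CRC-32 algorithm. The struct and function names are illustrative, not taken from the patent, which only fixes the byte widths.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Access-network packet layout: DA | SA | Reserved | Payload | CRC. */
#pragma pack(push, 1)
typedef struct {
    uint8_t da[8];       /* byte 0: packet type; bytes 1-5: metro address; bytes 6-7: access address */
    uint8_t sa[8];       /* same layout as the destination address */
    uint8_t reserved[2];
    /* a variable-length payload follows, then a 4-byte CRC */
} access_header_t;
#pragma pack(pop)

/* Standard Ethernet CRC-32 (bit-reflected, polynomial 0xEDB88320). */
uint32_t crc32_ethernet(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1)));
    }
    return crc ^ 0xFFFFFFFFu;
}
```

The CRC check value for the ASCII string "123456789" under this algorithm is the well-known 0xCBF43926, which can be used to validate an implementation.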
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more than 2 connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, between two node switches, and between two node servers. However, the metropolitan area network address of each metro device is unique. In order to accurately describe the connection relationships between metro devices, the embodiment of the present invention introduces a parameter, the label, to uniquely describe a connection of a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label: assuming that there are two connections between device A and device B, a packet from device A to device B has 2 available labels, and a packet from device B to device A likewise has 2 available labels. Labels are classified into incoming labels and outgoing labels; assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metro network is a process under centralized control; that is, both address allocation and label allocation of the metro network are dominated by the metro server, and the node switch and node server execute passively. This differs from MPLS, in which label allocation is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
Namely: Destination Address (DA), Source Address (SA), Reserved bytes (Reserved), Label, Payload (PDU), and CRC. The format of the label may be defined by reference to the following: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used, and it is positioned between the reserved bytes and the payload of the data packet.
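As a sketch of the label format just described (a 32-bit field between the reserved bytes and the payload, upper 16 bits reserved, lower 16 bits carrying the label value), the helpers below read and write the label field in place. The field offset and network byte order are assumptions; the patent does not specify them.

```c
#include <assert.h>
#include <stdint.h>

/* Metro packet: DA(8) | SA(8) | Reserved(2) | Label(4) | Payload | CRC.
 * The label uses only the low 16 bits of the 32-bit field; big-endian
 * (network) byte order is assumed here. */
enum { LABEL_OFFSET = 8 + 8 + 2 };  /* label field starts after DA, SA, reserved */

uint16_t metro_get_label(const uint8_t *pkt) {
    /* skip the 2 reserved high bytes of the 32-bit label field */
    return (uint16_t)((pkt[LABEL_OFFSET + 2] << 8) | pkt[LABEL_OFFSET + 3]);
}

void metro_set_label(uint8_t *pkt, uint16_t label) {
    pkt[LABEL_OFFSET]     = 0;  /* upper 16 bits are reserved */
    pkt[LABEL_OFFSET + 1] = 0;
    pkt[LABEL_OFFSET + 2] = (uint8_t)(label >> 8);
    pkt[LABEL_OFFSET + 3] = (uint8_t)(label & 0xFF);
}
```

A node switch would typically rewrite this field in place, replacing the incoming label (e.g. 0x0000) with the outgoing label (e.g. 0x0001) before forwarding the packet.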
Based on the above characteristics of the video network, the core concept of the embodiment of the invention is proposed. When detecting an on-demand operation of the user on a target video resource, the video networking terminal requests the target video resource from an internet server, then receives the target video resource returned by the internet server and decodes it to obtain video data and first subtitle data. When detecting that an unrecognized character code exists in the first subtitle data, the terminal performs code conversion on the unrecognized character code to obtain second subtitle data, and when playing the video data, displays the subtitle corresponding to the second subtitle data. This realizes on-demand playback of internet video resources by the video networking terminal, avoids the situation where the video networking terminal cannot recognize the subtitle data in the video resource, and improves the compatibility of the video networking terminal.
Referring to fig. 5, a flowchart illustrating steps of a method for video data processing according to an embodiment of the present invention is shown, and the method can be applied to a video network terminal.
The video networking terminal may include a physical terminal such as a set-top box (Set-Top Box, STB), a device that connects a television set to an external signal source and converts a compressed digital signal into television content for display on the television set. Generally, the set-top box may be connected to a camera and a microphone for collecting multimedia data such as video data and audio data, and may also be connected to a television for playing multimedia data such as video data and audio data.
The video networking terminal may also include a virtual video networking terminal, which is a terminal that accesses the video network to implement special services and refers to dedicated software. The software creates an environment between the terminal and the end user, the end user operates based on the environment created by the software, and the virtual video networking terminal can run programs on the terminal's software just like a real machine.
Specifically, the embodiment of the present invention may include the following steps:
step 501, when detecting a request operation of a user for a target video resource, the video networking terminal requests the target video resource from an internet server;
in the embodiment of the invention, the video networking terminal can acquire the video resource list from the Internet server through the coordination server and display the video resource list to the user.
When the video-on-demand operation of the user aiming at the target video resource in the video resource list is detected, the video networking terminal can acquire the identifier of the target video resource, generate a target video resource acquisition request based on the identifier, and send the target video resource acquisition request to the Internet server through the collaboration server so as to request the target video resource from the Internet server.
Step 502, the video networking terminal receives a target video resource returned by the internet server, and decodes the target video resource to obtain video data and first subtitle data;
after receiving the target video resource acquisition request, the internet server can search for the corresponding target video resource according to the identifier in the target video resource acquisition request, and then send the target video resource to the protocol conversion server, and the protocol conversion server sends the target video resource to the video networking terminal.
After receiving the target video resource, the video network terminal may decode the target video resource, and extract video data and first subtitle data from the target video resource, where the first subtitle data may include a character code obtained by encoding a plurality of characters.
Step 503, when detecting that the unrecognized character code exists in the first subtitle data, the video networking terminal performs code conversion on the unrecognized character code to obtain second subtitle data;
After the first subtitle data is obtained, the video networking terminal may match the characters corresponding to the character codes in the first subtitle data against a preset local character set; when detecting that an unrecognized character code exists in the first subtitle data, the terminal may perform code conversion on the unrecognized character code to obtain second subtitle data.
In a preferred embodiment of the present invention, step 503 may include the following sub-steps:
substep 11, when detecting that the unrecognized character code exists in the first caption data, calling a first preset function to generate a code conversion parameter aiming at the unrecognized character code;
As an example, the first preset function may be the iconv_open function, which initializes an internal buffer for the conversion and indicates from which encoding scheme to which encoding scheme the conversion is required.
When detecting that an unrecognized character code exists in the first subtitle data, the video networking terminal may call the interface of the first preset function to generate code conversion parameters for the unrecognized character code. Performing the code conversion through a preset function avoids the memory consumption, maintenance difficulty, narrow application range, and other problems caused by building a character set mapping table by oneself.
Specifically, the sub-step 11 may include the following sub-steps:
determining a first coding mode of the character codes which cannot be identified; acquiring a second coding mode which can be identified by the video network terminal; inputting the first coding mode and the second coding mode into the first preset function; and acquiring the code conversion parameter output by the first preset function.
Because the local character set in the video networking terminal does not store characters for all encoding modes, when the unrecognized character code is detected, the video networking terminal may determine the first encoding mode adopted by the unrecognized character code, such as GBK (Chinese Internal Code Extension Specification, a Chinese character encoding character set), and then acquire a second encoding mode recognizable by the video networking terminal, such as UCS (Universal Character Set).
After obtaining the first encoding mode and the second encoding mode, the video network terminal may input the identifiers of the first encoding mode and the second encoding mode into the first preset function, and the first preset function may generate the transcoding parameters according to the first encoding mode and the second encoding mode.
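Sub-step 11 can be sketched with the glibc iconv API: the names of the first (source) and second (target) encoding modes are passed to iconv_open, and the returned conversion descriptor plays the role of the code conversion parameter. This is a minimal sketch; the wrapper name is an assumption, and support for specific encoding names such as "GBK" or "UCS-2" depends on the platform's iconv implementation.

```c
#include <iconv.h>

/* Sub-step 11: obtain the conversion parameter for from_enc -> to_enc.
 * Returns (iconv_t)-1 if the encoding pair is not supported. */
iconv_t open_converter(const char *from_enc, const char *to_enc) {
    /* note the argument order: iconv_open takes (tocode, fromcode) */
    return iconv_open(to_enc, from_enc);
}
```

For the example in the text this would be called as `open_converter("GBK", "UCS-2")`, with the return value checked against `(iconv_t)-1` before use.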
And a substep 12, inputting the code conversion parameter and the unrecognized character code into a second preset function, and performing code conversion on the unrecognized character code by the second preset function according to the code conversion parameter to obtain second caption data.
As an example, the second preset function may be the iconv function, which performs the actual code conversion and requires two indirect buffer pointers and the corresponding remaining-byte-count pointers to be provided.
After the transcoding parameters are obtained, the video network terminal may input the transcoding parameters and the unrecognized character codes into a second preset function, and the second preset function may convert the unrecognized character codes into character codes encoded in a second encoding manner according to the transcoding parameters, thereby obtaining second subtitle data.
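Sub-step 12 can be sketched with the iconv function itself, which takes the descriptor produced by iconv_open together with two indirect buffer pointers and two remaining-byte counters, exactly as described above. The wrapper name and fixed output capacity are assumptions for illustration.

```c
#include <iconv.h>
#include <stddef.h>

/* Sub-step 12: convert inlen bytes of "in" via descriptor cd into "out".
 * Returns the number of bytes written, or (size_t)-1 on error. */
size_t transcode(iconv_t cd, char *in, size_t inlen, char *out, size_t outcap) {
    char *inp = in, *outp = out;
    size_t inleft = inlen, outleft = outcap;
    /* iconv advances the pointers and decrements the counters as it converts */
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
        return (size_t)-1;
    return outcap - outleft;
}
```

On failure, errno distinguishes an invalid input sequence (EILSEQ) from an output buffer that is too small (E2BIG), so a caller can grow the buffer and retry.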
Step 504, when the video network terminal plays the video data, displaying the subtitles corresponding to the second subtitle data.
After the second subtitle data is obtained, the video networking terminal may display the subtitle corresponding to the second subtitle data at the designated position when playing the video data.
In a preferred embodiment of the present invention, step 504 may include the following sub-steps:
a substep 21 of obtaining a time stamp of the currently played video data;
in a specific implementation, the video data and the subtitle data may have timestamps, and when the video data is played by the video networking terminal, the timestamp of the currently played video data may be acquired.
Substep 22, searching second caption data corresponding to the timestamp;
after the timestamp is obtained, the video network terminal may search for the same timestamp in the second subtitle data, and may obtain the second subtitle data corresponding to the timestamp.
Substep 23, matching the second caption data with the corresponding caption from a preset local character set;
after the second caption data is obtained, the video network terminal can match the character codes in the second caption data in the local character set to obtain the characters corresponding to the character codes, and organize the characters into the captions.
And a substep 24 of displaying the subtitles.
After the subtitles are obtained, the video networking terminal can synchronously display the subtitles corresponding to the currently played video data when playing the video data.
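Sub-steps 21–23 amount to looking up the subtitle cue whose display window covers the timestamp of the currently played video data. A minimal sketch, assuming the cues are stored sorted by start time with explicit start/end timestamps (the patent does not fix the data structure, so the type and field names are assumptions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    int64_t start_ms, end_ms;  /* display window of the cue */
    const char *text;          /* already-transcoded second subtitle data */
} subtitle_cue_t;

/* Binary search for the cue covering timestamp ts; NULL if none applies. */
const subtitle_cue_t *find_cue(const subtitle_cue_t *cues, size_t n, int64_t ts) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (ts < cues[mid].start_ms)      hi = mid;
        else if (ts >= cues[mid].end_ms)  lo = mid + 1;
        else                              return &cues[mid];
    }
    return NULL;
}
```

The returned cue's text would then be matched against the local character set and rendered in sync with the frame (sub-step 24).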
In the embodiment of the invention, when detecting an on-demand operation of a user on a target video resource, the video networking terminal requests the target video resource from the internet server, then receives the target video resource returned by the internet server and decodes it to obtain video data and first subtitle data. When detecting that an unrecognized character code exists in the first subtitle data, the terminal performs code conversion on the unrecognized character code to obtain second subtitle data, and when playing the video data, displays the subtitle corresponding to the second subtitle data. This realizes on-demand playback of internet video resources by the video networking terminal, avoids the situation where the video networking terminal cannot recognize the subtitle data in the video resource, and improves the compatibility of the video networking terminal.
Referring to fig. 6, a flowchart illustrating steps of a method for processing video data according to an embodiment of the present invention is shown, where the method may be applied to a video network terminal, and specifically may include the following steps:
Step 601, when detecting the on-demand operation of a user for a target video resource, requesting the target video resource from an internet server;
in the embodiment of the invention, the video networking terminal can acquire the video resource list from the Internet server through the coordination server and display the video resource list to the user.
When the video-on-demand operation of the user aiming at the target video resource in the video resource list is detected, the video networking terminal can acquire the identifier of the target video resource, generate a target video resource acquisition request based on the identifier, and send the target video resource acquisition request to the Internet server through the collaboration server so as to request the target video resource from the Internet server.
Step 602, receiving a target video resource returned by the internet server, and decoding the target video resource to obtain video data and first subtitle data;
after receiving the target video resource acquisition request, the internet server can search for the corresponding target video resource according to the identifier in the target video resource acquisition request, and then send the target video resource to the protocol conversion server, and the protocol conversion server sends the target video resource to the video networking terminal.
After receiving the target video resource, the video network terminal may decode the target video resource, and extract video data and first subtitle data from the target video resource, where the first subtitle data may include a character code obtained by encoding a plurality of characters.
Step 603, when it is detected that unrecognized character codes exist in the first subtitle data, calling a first preset function to generate code conversion parameters for the unrecognized character codes;
when it is detected that the unrecognized character codes exist in the first subtitle data, the video networking terminal may call an interface of a first preset function to generate code conversion parameters for the unrecognized character codes.
Step 604, calling the first preset function to establish an internal buffer area; the internal buffer area is used for performing code conversion on the unrecognized character codes;
before transcoding, the terminal of the video network may call the first preset function to establish an internal buffer, so that the second preset function may use the internal buffer to transcode the unrecognized character code.
Step 605, inputting the code conversion parameter and the unrecognized character code into a second preset function, and performing code conversion on the unrecognized character code by the second preset function according to the code conversion parameter to obtain second subtitle data;
after the transcoding parameters are obtained, the video network terminal may input the transcoding parameters and the unrecognized character codes into a second preset function, and the second preset function may convert the unrecognized character codes into character codes encoded in a second encoding manner according to the transcoding parameters, thereby obtaining second subtitle data.
Step 606, calling the third preset function to release the internal buffer area;
As an example, the third preset function may be the iconv_close function, which frees the internal buffer created by the iconv_open function.
After the code conversion, the video network terminal can call the third preset function to release the internal buffer area established by the first preset function, so as to save the memory resource of the video network terminal.
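Steps 603–606 map directly onto the iconv_open / iconv / iconv_close lifecycle. The sketch below ties the three calls together for one subtitle string; the function name and the NUL-terminated output convention are illustrative assumptions.

```c
#include <iconv.h>
#include <stddef.h>
#include <string.h>

/* Steps 603-606: open a converter, transcode one subtitle string, close it.
 * Returns 0 on success, -1 on failure; "out" is NUL-terminated on success. */
int convert_subtitle(const char *from, const char *to,
                     char *in, size_t inlen, char *out, size_t outcap) {
    iconv_t cd = iconv_open(to, from);   /* steps 603/604: parameters + buffer */
    if (cd == (iconv_t)-1) return -1;
    char *inp = in, *outp = out;
    size_t inleft = inlen, outleft = outcap - 1;
    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);  /* step 605 */
    iconv_close(cd);                     /* step 606: release the buffer */
    if (rc == (size_t)-1) return -1;
    *outp = '\0';
    return 0;
}
```

Releasing the descriptor immediately after each conversion keeps memory usage bounded, as the text notes; a terminal converting many subtitle lines with the same encoding pair could instead keep one descriptor open and reuse it.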
Step 607, displaying the subtitle corresponding to the second subtitle data when playing the video data.
After the second subtitle data is obtained, the video networking terminal may display the subtitle corresponding to the second subtitle data at the designated position when playing the video data.
In the embodiment of the invention, when detecting an on-demand operation of a user on a target video resource, the video networking terminal requests the target video resource from the internet server, then receives the target video resource returned by the internet server and decodes it to obtain video data and first subtitle data. When detecting that an unrecognized character code exists in the first subtitle data, the terminal performs code conversion on the unrecognized character code to obtain second subtitle data, and when playing the video data, displays the subtitle corresponding to the second subtitle data. This realizes on-demand playback of internet video resources by the video networking terminal, avoids the situation where the video networking terminal cannot recognize the subtitle data in the video resource, and improves the compatibility of the video networking terminal.
In addition, the internal buffer area is established aiming at the code conversion, and the internal buffer area is released after the code conversion, so that the memory resource of the video network terminal is saved, and the performance of the video network terminal is improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, a block diagram of a video data processing apparatus according to an embodiment of the present invention is shown, and is applied to a video network terminal, and specifically includes the following modules:
a target video resource request module 701, configured to request a target video resource from an internet server when an on-demand operation of a user on the target video resource is detected;
a target video resource decoding module 702, configured to receive a target video resource returned by the internet server, and decode the target video resource to obtain video data and first subtitle data;
a second caption data obtaining module 703, configured to perform code conversion on the unrecognized character codes when it is detected that the unrecognized character codes exist in the first caption data, to obtain second caption data;
and a subtitle display module 704, configured to display a subtitle corresponding to the second subtitle data when the video data is played.
In a preferred embodiment of the present invention, the second subtitle data obtaining module 703 includes:
the first preset function calling sub-module is used for calling a first preset function to generate a code conversion parameter aiming at the unrecognized character code when the unrecognized character code is detected to exist in the first caption data;
and the second preset function calling sub-module is used for inputting the code conversion parameter and the unrecognized character code into a second preset function, and the second preset function carries out code conversion on the unrecognized character code according to the code conversion parameter to obtain second caption data.
In a preferred embodiment of the present invention, the first preset function call submodule includes:
a first encoding mode determining unit configured to determine a first encoding mode of the unrecognizable character encoding;
the second coding mode acquisition unit is used for acquiring a second coding mode which can be identified by the video network terminal;
a coding mode input unit, configured to input the first coding mode and the second coding mode into the first preset function;
and the code conversion parameter output unit is used for acquiring the code conversion parameter output by the first preset function.
In a preferred embodiment of the present invention, the method further comprises:
the internal buffer area establishing module is used for calling the first preset function to establish an internal buffer area; the internal buffer area is used for performing code conversion on the unrecognized character codes;
and the internal buffer area releasing module is used for calling the third preset function to release the internal buffer area.
In a preferred embodiment of the present invention, the subtitle display module 704 includes:
the time stamp obtaining submodule is used for obtaining the time stamp of the currently played video data;
the second caption data searching submodule is used for searching the second caption data corresponding to the timestamp;
the subtitle matching sub-module is used for matching the second subtitle data with the corresponding subtitle from a preset local character set;
and the matched caption display sub-module is used for displaying the caption.
In the embodiment of the invention, when detecting an on-demand operation of a user on a target video resource, the video networking terminal requests the target video resource from the internet server, then receives the target video resource returned by the internet server and decodes it to obtain video data and first subtitle data. When detecting that an unrecognized character code exists in the first subtitle data, the terminal performs code conversion on the unrecognized character code to obtain second subtitle data, and when playing the video data, displays the subtitle corresponding to the second subtitle data. This realizes on-demand playback of internet video resources by the video networking terminal, avoids the situation where the video networking terminal cannot recognize the subtitle data in the video resource, and improves the compatibility of the video networking terminal.
The embodiment of the invention also discloses a mobile terminal, which comprises a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein when the computer program is executed by the processor, the steps of the video data processing method are realized.
The embodiment of the present invention also discloses a computer-readable storage medium, which is characterized in that a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the method for processing video data as described above.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and apparatus for processing video data provided by the present invention are described in detail above, and the principle and the implementation of the present invention are explained herein by applying specific examples, and the description of the above embodiments is only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A method for processing video data is applied to a video network terminal and comprises the following steps:
the video networking terminal requests a target video resource from an Internet server when detecting the on-demand operation of a user for the target video resource;
the video networking terminal receives a target video resource returned by the Internet server and decodes the target video resource to obtain video data and first subtitle data;
when the video network terminal detects that unrecognized character codes exist in the first caption data, performing code conversion on the unrecognized character codes to obtain second caption data;
when the video network terminal plays the video data, displaying the caption corresponding to the second caption data;
when detecting that the unrecognized character codes exist in the first caption data, the video network terminal performs code conversion on the unrecognized character codes to obtain second caption data, wherein the step of obtaining the second caption data comprises the following steps:
when the first caption data is detected to have unrecognized character codes, calling a first preset function to generate code conversion parameters aiming at the unrecognized character codes;
and inputting the code conversion parameter and the unrecognized character code into a second preset function, and performing code conversion on the unrecognized character code by the second preset function according to the code conversion parameter to obtain second caption data.
2. The method of claim 1, wherein the step of calling a first preset function to generate transcoding parameters for the unrecognized character encoding upon detecting that the unrecognized character encoding exists in the first subtitle data comprises:
determining a first coding mode of the character codes which cannot be identified;
acquiring a second coding mode which can be identified by the video network terminal;
inputting the first coding mode and the second coding mode into the first preset function;
and acquiring the code conversion parameter output by the first preset function.
3. The method according to claim 1 or 2, wherein before the step of inputting the transcoding parameter and the unrecognized character code into a second preset function, and transcoding the unrecognized character code by the second preset function according to the transcoding parameter to obtain second subtitle data, the method further comprises:
calling the first preset function to establish an internal buffer area; the internal buffer area is used for performing code conversion on the unrecognized character codes;
after the step of inputting the transcoding parameter and the unrecognized character code into a second preset function, and performing transcoding on the unrecognized character code by the second preset function according to the transcoding parameter to obtain second subtitle data, the method further includes:
and calling a third preset function to release the internal buffer area.
4. The method according to claim 1, wherein the step of displaying the subtitle corresponding to the second subtitle data when the video network terminal plays the video data comprises:
acquiring a timestamp of the currently played video data;
searching for the second subtitle data corresponding to the timestamp;
matching the second subtitle data to the corresponding subtitle in a preset local character set;
and displaying the subtitle.
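The display path of claim 4 — playback timestamp in, subtitle out — can be sketched as a lookup over a time-sorted cue list using binary search. The cue times and texts below are invented for illustration; the patent does not specify a cue format.

```python
import bisect

# Hypothetical cue list: (start_time_seconds, subtitle_text), sorted by time.
cues = [(0.0, "第一句"), (2.5, "第二句"), (5.0, "第三句")]
cue_times = [t for t, _ in cues]

def subtitle_at(timestamp: float):
    # Find the last cue whose start time is <= the playback timestamp.
    i = bisect.bisect_right(cue_times, timestamp) - 1
    return cues[i][1] if i >= 0 else None
```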
5. An apparatus for processing video data, applied to a video network terminal, comprising:
a target video resource request module, configured to request a target video resource from an Internet server when a user's on-demand operation for the target video resource is detected;
a target video resource decoding module, configured to receive the target video resource returned by the Internet server and decode the target video resource to obtain video data and first subtitle data;
a second subtitle data obtaining module, configured to transcode an unrecognized character encoding when it is detected that the unrecognized character encoding exists in the first subtitle data, to obtain second subtitle data;
and a subtitle display module, configured to display the subtitle corresponding to the second subtitle data when the video data is played;
wherein the second subtitle data obtaining module comprises:
a first preset function calling sub-module, configured to call a first preset function to generate a transcoding parameter for the unrecognized character encoding when it is detected that the unrecognized character encoding exists in the first subtitle data;
and a second preset function calling sub-module, configured to input the transcoding parameter and the unrecognized character encoding into a second preset function, the second preset function transcoding the unrecognized character encoding according to the transcoding parameter to obtain the second subtitle data.
6. The apparatus of claim 5, wherein the first preset function calling sub-module comprises:
a first encoding mode determining unit, configured to determine a first encoding mode of the unrecognized character encoding;
a second encoding mode acquiring unit, configured to acquire a second encoding mode recognizable by the video network terminal;
an encoding mode input unit, configured to input the first encoding mode and the second encoding mode into the first preset function;
and a transcoding parameter output unit, configured to acquire the transcoding parameter output by the first preset function.
7. The apparatus of claim 5 or 6, further comprising:
an internal buffer establishing module, configured to call the first preset function to establish an internal buffer, the internal buffer being used for transcoding the unrecognized character encoding;
and an internal buffer releasing module, configured to call a third preset function to release the internal buffer.
8. The apparatus of claim 5, wherein the subtitle display module comprises:
a timestamp acquiring sub-module, configured to acquire a timestamp of the currently played video data;
a second subtitle data searching sub-module, configured to search for the second subtitle data corresponding to the timestamp;
a subtitle matching sub-module, configured to match the second subtitle data to the corresponding subtitle in a preset local character set;
and a subtitle display sub-module, configured to display the subtitle.
CN201711487183.4A 2017-12-29 2017-12-29 Video data processing method and device Active CN108965930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711487183.4A CN108965930B (en) 2017-12-29 2017-12-29 Video data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711487183.4A CN108965930B (en) 2017-12-29 2017-12-29 Video data processing method and device

Publications (2)

Publication Number Publication Date
CN108965930A CN108965930A (en) 2018-12-07
CN108965930B true CN108965930B (en) 2021-05-28

Family

ID=64495750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711487183.4A Active CN108965930B (en) 2017-12-29 2017-12-29 Video data processing method and device

Country Status (1)

Country Link
CN (1) CN108965930B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859824B (en) * 2018-12-18 2022-01-14 视联动力信息技术股份有限公司 Pathological image remote display method and device
US10911791B2 (en) * 2019-01-09 2021-02-02 Netflix, Inc. Optimizing encoding operations when generating a buffer-constrained version of a media title
CN110072126A (en) * 2019-03-19 2019-07-30 视联动力信息技术股份有限公司 Data request method, association turn server and computer readable storage medium
CN112738641A (en) * 2020-12-28 2021-04-30 深圳Tcl新技术有限公司 Subtitle playing method and device, terminal equipment and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1335727A (en) * 2000-07-14 2002-02-13 汤姆森许可贸易公司 Method and apparatus for recording caption
CN103281588A (en) * 2013-05-15 2013-09-04 无锡北斗星通信息科技有限公司 Ultrahigh-definition digital television receiver adopting HEVC (High Efficiency Video Coding) for video decoding
CN103413074A (en) * 2013-07-08 2013-11-27 北京深思数盾科技有限公司 Method and device for protecting software through API
CN104156314A (en) * 2014-08-14 2014-11-19 北京航空航天大学 Code reuse method applied to test system
CN106549912A (en) * 2015-09-17 2017-03-29 北京视联动力国际信息技术有限公司 A kind of player method and system of video data
CN206451175U (en) * 2016-08-31 2017-08-29 青海民族大学 A kind of Tibetan language paper copy detection system based on Tibetan language sentence level

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930582B (en) * 2010-08-06 2013-04-10 中国工商银行股份有限公司 Multilanguage-supporting data conversion equipment and bank transaction system
CN103248951B (en) * 2013-04-28 2016-01-20 天脉聚源(北京)传媒科技有限公司 A kind of system and method adding scroll information in video
US9189207B2 (en) * 2014-03-11 2015-11-17 Telefonaktiebolaget L M Ericsson (Publ) Methods and systems for dynamic runtime generation of customized applications

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1335727A (en) * 2000-07-14 2002-02-13 汤姆森许可贸易公司 Method and apparatus for recording caption
CN103281588A (en) * 2013-05-15 2013-09-04 无锡北斗星通信息科技有限公司 Ultrahigh-definition digital television receiver adopting HEVC (High Efficiency Video Coding) for video decoding
CN106060646A (en) * 2013-05-15 2016-10-26 蔡留凤 Ultrahigh-definition digital television receiver applying subtitle processing module
CN103413074A (en) * 2013-07-08 2013-11-27 北京深思数盾科技有限公司 Method and device for protecting software through API
CN104156314A (en) * 2014-08-14 2014-11-19 北京航空航天大学 Code reuse method applied to test system
CN106549912A (en) * 2015-09-17 2017-03-29 北京视联动力国际信息技术有限公司 A kind of player method and system of video data
CN206451175U (en) * 2016-08-31 2017-08-29 青海民族大学 A kind of Tibetan language paper copy detection system based on Tibetan language sentence level

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Brightcove Inc.; "Brightcove Democratizes OTT Services with Brightcove OTT Flow, Powered by Accedo"; Journal of Engineering; 2016-04-25; full text *

Also Published As

Publication number Publication date
CN108965930A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108965224B (en) Video-on-demand method and device
CN108881815B (en) Video data transmission method and device
CN109842519B (en) Method and device for previewing video stream
CN110769310B (en) Video processing method and device based on video network
CN108965930B (en) Video data processing method and device
CN110049273B (en) Video networking-based conference recording method and transfer server
CN108574816B (en) Video networking terminal and communication method and device based on video networking terminal
CN110149305B (en) Video network-based multi-party audio and video playing method and transfer server
CN109743284B (en) Video processing method and system based on video network
CN110769179B (en) Audio and video data stream processing method and system
CN109544879B (en) Alarm data processing method and system
CN111147859A (en) Video processing method and device
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110769297A (en) Audio and video data processing method and system
CN111212255B (en) Monitoring resource obtaining method and device and computer readable storage medium
CN110134892B (en) Loading method and system of monitoring resource list
CN110086773B (en) Audio and video data processing method and system
CN109768964B (en) Audio and video display method and device
CN111447407A (en) Monitoring resource transmission method and device
CN110049069B (en) Data acquisition method and device
CN110536148B (en) Live broadcasting method and equipment based on video networking
CN109688073B (en) Data processing method and system based on video network
CN109859824B (en) Pathological image remote display method and device
CN110691214B (en) Data processing method and device for business object
CN110474934B (en) Data processing method and video networking monitoring platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1103, Ge Hua Building, No. 1 Qinglong Hutong, Dongcheng District, Beijing 100000

Applicant after: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd.

Address before: A1103-1113, Song Hua Building, No. 1 Qinglong Hutong, Dongcheng District, Beijing 100000

Applicant before: BEIJING VISIONVERA INTERNATIONAL INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant