CN114302215B - Video data stream decoding system, method, electronic device and medium - Google Patents

Video data stream decoding system, method, electronic device and medium

Info

Publication number
CN114302215B
Authority
CN
China
Prior art keywords
data
subtitle
digital television
demultiplexer
identifier
Prior art date
Legal status
Active
Application number
CN202111642351.9A
Other languages
Chinese (zh)
Other versions
CN114302215A (en)
Inventor
张侠
Current Assignee
Beijing Eswin Computing Technology Co Ltd
Guangzhou Quanshengwei Information Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Guangzhou Quanshengwei Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd and Guangzhou Quanshengwei Information Technology Co Ltd
Priority to CN202111642351.9A
Publication of CN114302215A
Application granted
Publication of CN114302215B
Legal status: Active


Abstract

Video data stream decoding systems, methods, electronic devices, and media are provided. The system comprises: a demultiplexer configured to demultiplex, from a received bit stream based on a first digital television standard, first video stream data based on the first digital television standard; and a first video decoder connected to the demultiplexer, the first video decoder configured to decode the first video stream data based on the first digital television standard demultiplexed by the demultiplexer, separate out first subtitle data, and transmit the first subtitle data to the demultiplexer; wherein the demultiplexer is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and first audio stream data based on the first digital television standard. The technical solution of the present application increases compatibility, unifies the way in which the software application layer acquires resources, and makes the hardware architecture simpler and clearer.

Description

Video data stream decoding system, method, electronic device and medium
Technical Field
The present application relates to the field of video encoding and decoding, and more particularly, to a video data stream decoding system, method, electronic device, and non-transitory storage medium.
Background
Digital television (DTV) is a television system in which the signal is processed digitally from program acquisition and program production through program transmission to the user side; that is, every link from the studio through transmission, distribution, and reception uses digital signals, propagated as digital sequences composed of the digits 0 and 1. Digital television is the third generation of television, following black-and-white analog television and color analog television, and is a concept defined relative to analog television. Compared with analog television, digital television offers higher image quality, more functionality, better sound, and richer content, and usually supports interactivity and communication.
Current digital television standards generally fall into three families, developed in the United States, Europe, and Japan, respectively. The United States standard is ATSC (Advanced Television Systems Committee); the European standard is DVB (Digital Video Broadcasting); the Japanese standard is ISDB (Integrated Services Digital Broadcasting).
In the development of digital television specifications, DVB and ISDB each have standards that separately define the specification and format used for subtitles. Developers generally obtain the packetized elementary stream (PES) carrying the subtitles by allocating a packet identifier (PID) filter on a demultiplexer (Demux), and then decode and render that stream so that it is finally displayed for the user to browse.
There are, however, digital television standards in which the subtitle data is not packetized separately but is embedded in the video data stream. When the receiving end plays such a video stream, a dedicated hardware module for decoding the video stream of that digital television standard has to be designed separately so that the video stream can be decoded and played smoothly.
Therefore, a video stream decoding and playback solution is needed that is conveniently compatible with various digital television standards without adding too many hardware modules.
Disclosure of Invention
According to one aspect of the present application, there is provided a video data stream decoding system comprising: a demultiplexer configured to demultiplex, from a received bit stream based on a first digital television standard, first video stream data based on the first digital television standard; a first video decoder connected to the demultiplexer, the first video decoder configured to decode the first video stream data based on the first digital television standard demultiplexed by the demultiplexer, separate out first subtitle data, and transmit the first subtitle data to the demultiplexer; wherein the demultiplexer is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and first audio stream data based on the first digital television standard.
In one embodiment of the application, the demultiplexer is further configured to package the first subtitle data into a first subtitle identifier data stream with a first subtitle identifier, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
In one embodiment of the application, the demultiplexer is configured to separate the first subtitle data according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier, and the first audio stream data is assigned a first audio stream identifier, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
In one embodiment of the present application, the first digital television standard is a standard in which subtitle data is embedded in a video stream, and the second digital television standard is a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed.
In one embodiment of the present application, the first digital television standard is the U.S. Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 caption standard, and the second digital television standard is the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
According to an aspect of the present application, there is provided a video data stream decoding method comprising: demultiplexing, by a demultiplexer, first video stream data based on a first digital television standard from a received bit stream based on the first digital television standard; decoding, by a first video decoder, the first video stream data based on the first digital television standard demultiplexed by the demultiplexer, separating out first subtitle data, and transmitting the first subtitle data to the demultiplexer; and outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and first audio stream data based on the first digital television standard.
In one embodiment of the application, the method comprises: the first subtitle data is packetized by a demultiplexer with a first subtitle identifier into a first subtitle identifier data stream, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
In one embodiment of the present application, the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard includes: the first subtitle data is separated by a demultiplexer according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier, and the first audio stream data is assigned a first audio stream identifier, wherein the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard further includes: the first subtitle data, the first video stream data, and the first audio stream data are separated by a demultiplexer according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
In one embodiment of the present application, the first digital television standard is a standard in which subtitle data is embedded in a video stream, and the second digital television standard is a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed.
In one embodiment of the present application, the first digital television standard is the U.S. Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 caption standard, and the second digital television standard is the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
According to an aspect of the present application, there is provided an electronic apparatus including: a memory for storing instructions; and a processor for reading the instructions in the memory and performing the methods of the various embodiments of the present application.
According to one aspect of the present application there is provided a non-transitory storage medium having instructions stored thereon, wherein the instructions, when read by a processor, cause the processor to perform the method of the various embodiments of the present application.
The technical solution of the present application increases compatibility, unifies the way in which the software application layer acquires resources, and makes the hardware architecture simpler and clearer.
Drawings
In order to illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 shows a block diagram of a receiver of the DVB digital television standard.
Fig. 2 shows the transmission structure of digital video, the Program Map Table (PMT), the Event Information Table (EIT), audio and other data, and synchronization information in a digital television bitstream under the CEA-708 standard.
FIG. 3 shows a block diagram of a receiver of the CEA-708 digital television standard.
Fig. 4 shows a block diagram of a video data stream decoding system according to an embodiment of the application.
Fig. 5 shows an exemplary diagram of an application of the decoding system according to the embodiment of fig. 4.
Fig. 6 shows a schematic diagram of the MPEG-2 transport stream format used as the bit stream of a digital television (DTV), that is, the transport stream format of the first digital television standard.
Fig. 7 shows a schematic diagram of a simplified version of the packetized elementary stream (PES) data format, with part of the format omitted.
Fig. 8 shows a schematic diagram of a simplified version of the elementary stream (ES) data format, with part of the format omitted.
Fig. 9 shows a schematic diagram of a data format of closed caption CC data.
Fig. 10 shows a flowchart of a video data stream decoding method according to an embodiment of the present application.
FIG. 11 illustrates a block diagram of an exemplary computer system suitable for use in implementing embodiments of the present application.
Fig. 12 shows a schematic diagram of a non-transitory computer-readable storage medium according to an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or arrangement of functions, and any functional block or arrangement of functions may be implemented as a physical entity or a logical entity, or a combination of both.
The present invention will be described in further detail below with reference to the drawings and detailed description for the purpose of enabling those skilled in the art to understand the invention better.
Note that the example described next is only one specific example, and does not limit the embodiments of the present invention to the specific shapes, hardware, connection relationships, steps, values, conditions, data, orders, and the like that are shown and described. Those skilled in the art can, upon reading the present specification, use the concepts of the invention to construct further embodiments not mentioned in the specification.
DVB and ISDB each define the specification and format used for subtitles separately. The developer usually obtains the packetized elementary stream (PES) carrying the subtitles by allocating a packet identifier (PID) filter on the demultiplexer (Demux), and then decodes and renders it so that it is finally displayed for the user to browse. That is, at the transmitting end, the caption data of the video is separated from the video data and the audio data and time-division multiplexed, and at the receiving end the code stream is demultiplexed to separate the individual caption data, video data, and audio data. A single set of such receiver hardware is therefore compatible with most digital television standards.
Fig. 1 shows a block diagram of a receiver 100 for the DVB digital television standard. The receiver 100 includes a tuner 101, which receives radio frequency signals and is responsible for frequency conversion, filtering, automatic gain control, and so on; a demodulator 102, which demodulates the data output from the tuner 101; and a demultiplexer 103, which demultiplexes the data output from the demodulator 102 into separate DVB audio data, DVB video data, and DVB subtitle data by means of their respective packet identifiers (PIDs).
It can be seen that, for such digital television streams, the DVB subtitle data is obtained directly from the demultiplexer 103 by means of its own packet identifier (PID).
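As a minimal illustration of this conventional DVB/ISDB flow, the C sketch below models an application opening a PID filter on the demultiplexer and reading the subtitle PES packets it delivers. The API names (demux_open_pid_filter, demux_read_pes), the PID value 0x0100, and the stub behavior are hypothetical placeholders, not part of the patent or of any real receiver SDK.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical receiver-SDK style API: open a PID filter on the demux and
 * read the subtitle PES packets it delivers. Names and behavior are
 * illustrative stubs, not a real driver interface. */
struct pid_filter { uint16_t pid; int packets_left; };

static struct pid_filter demux_open_pid_filter(uint16_t pid)
{
    struct pid_filter f = { pid, 2 };   /* stub: pretend two PES packets arrive */
    return f;
}

static int demux_read_pes(struct pid_filter *f, uint8_t *buf, size_t size)
{
    if (f->packets_left <= 0) return 0;
    f->packets_left--;
    (void)buf; (void)size;
    return 184;                          /* stub payload size */
}

int main(void)
{
    /* The subtitle PID comes from the PMT; 0x0100 is only an example. */
    struct pid_filter sub = demux_open_pid_filter(0x0100);
    uint8_t pes[4096];
    int n;
    while ((n = demux_read_pes(&sub, pes, sizeof(pes))) > 0) {
        /* decode_dvb_subtitle(pes, n); render bitmap for display (not shown) */
        printf("subtitle PES: %d bytes from PID 0x%04X\n", n, (unsigned)sub.pid);
    }
    return 0;
}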
However, in some exceptional digital television standards, such as the Consumer Electronics Association/Electronic Industries Alliance (CEA/EIA)-708 standard adopted within the ATSC standard, the subtitle data is not packetized separately but is embedded in the video data stream; this is commonly referred to as ATSC closed captioning (CC). Closed captions are sometimes called "captions for the hearing impaired", because they describe all sounds and dialog in the video with words or symbols, including sounds such as "knocking" or "running water" that are not present in the ordinary subtitles of DVB and ISDB, which describe only the dialog in words.
Closed caption (CC) data may be transmitted over 9 channels: the odd field carries 4 channels, CC1, CC2, TEXT1, and TEXT2; the even field carries 5 channels, CC3, CC4, TEXT3, TEXT4, and XDS (Extended Data Services). CC1, CC2, CC3, and CC4 can be used to transmit text in different languages, mainly the dialog of the people on screen, and the corresponding text can be displayed near the speaker's mouth. TEXT1, TEXT2, TEXT3, and TEXT4 are mainly used to transmit information such as weather forecasts and news. XDS is generally used to transmit time information, television network information, the name of the current program, and so on, and the data it carries is mainly used for the V-CHIP (program rating). Closed captioning mainly follows two standards: the CEA-608 (EIA-608) and CEA-708 standards.
The data stream of the ATSC closed caption resource is not carried in a separate packet identifier (PID) data stream, but is transmitted within Moving Picture Experts Group-2 (MPEG-2) picture user data. Fig. 2 shows the transmission structure of digital video, the Program Map Table (PMT), the Event Information Table (EIT), audio and other data, and synchronization information in a digital television bitstream under the CEA-708 standard. The digital television bit stream comprises audio data, video data, and control data, where the control data is responsible for controlling the playback of the audio data and the video data. As can be seen in Fig. 2, the digital television closed caption (DTVCC) service data, that is, picture user data containing caption text, window commands, and so on, is encapsulated in the video data.
For such a closed caption video stream, when the video stream is played at the receiving end's decoder, a special hardware module for decoding the video stream of this digital television standard needs to be designed separately so that the video stream can be decoded and played smoothly, as shown in Fig. 3. Fig. 3 shows a block diagram of a receiver 300 for the CEA-708 digital television standard. The receiver 300 includes a tuner 101, which receives radio frequency signals and is responsible for frequency conversion, filtering, automatic gain control, and so on; a demodulator 102, which demodulates the data output from the tuner 101; a demultiplexer 303, which demultiplexes the data output from the demodulator 102 into separate ATSC audio data and ATSC video data; and a video decoder 304, which decodes the ATSC video data again and separates out the ATSC closed caption (CC) data.
It can be seen that this kind of closed caption extraction acquires a special data stream through a special channel (for example, the additional video decoder 304). This makes the way in which the software application layer acquires resources non-uniform, and the hardware architecture complex.
The present application aims to unify the way in which the software application layer acquires resources and to make the hardware architecture simple and clear.
Fig. 4 shows a block diagram of a video data stream decoding system according to an embodiment of the application.
As shown in Fig. 4, the video data stream decoding system 400 includes: a demultiplexer 401 configured to demultiplex, from a received bit stream based on a first digital television standard, first video stream data based on the first digital television standard; and a first video decoder 402 connected to the demultiplexer 401, the first video decoder 402 being configured to decode the first video stream data based on the first digital television standard demultiplexed by the demultiplexer 401, separate out first subtitle data, and transmit the first subtitle data to the demultiplexer 401; wherein the demultiplexer 401 is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
Fig. 5 shows an exemplary diagram of an application of the decoding system according to the embodiment of fig. 4.
As shown in Fig. 5, the tuner 101 receives radio frequency signals and is responsible for frequency conversion, filtering, automatic gain control, and other functions. The demodulator 102 demodulates the data output from the tuner 101 to obtain a bit stream based on the first digital television standard. The demultiplexer 401 demultiplexes the first video stream data based on the first digital television standard and the first audio stream data based on the first digital television standard from the received bit stream based on the first digital television standard. The first video decoder 402 decodes the first video stream data based on the first digital television standard demultiplexed by the demultiplexer 401, separates out first subtitle data, and transmits the first subtitle data to the demultiplexer 401. The demultiplexer 401 then outputs the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
Here, the first digital television standard may be a standard in which subtitle data is embedded in the video stream, such as the Consumer Electronics Association CEA-708 standard used in the American ATSC system.
Here it can be seen that only one first video decoder 402 is added: without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The special data stream acquisition and processing steps for ATSC closed caption (CC) data, and the extra hardware module shown in Fig. 3, are no longer needed. Compatibility is therefore increased, the way in which the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
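To make the data flow of Fig. 4 and Fig. 5 concrete, the following C sketch models the feedback path at a very high level: the demultiplexer outputs video and audio by PID, the first video decoder separates the caption bytes embedded in the video elementary stream and hands them back, and the demultiplexer then emits them as an ordinary subtitle stream under their own identifier. All names, PID values, and the stubbed caption payload are illustrative assumptions rather than the actual implementation.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical PIDs for the first digital television standard (ATSC-like). */
#define PID_VIDEO    4567
#define PID_AUDIO    6789
#define PID_CAPTION  8192   /* assigned by the demux to the fed-back caption data */

/* Stand-in for the caption bytes the first video decoder 402 separates
 * from the video elementary stream (e.g. CEA-708 cc_data). */
static const uint8_t fake_cc_payload[] = { 0xFC, 0x48, 0x49 };

/* Hypothetical demux output hook: in a real receiver this would hand the
 * stream to the audio decoder, the video renderer, or the caption renderer. */
static void demux_output(uint16_t pid, const uint8_t *data, size_t len)
{
    (void)data;
    printf("demux 401: output PID %u, %zu bytes\n", (unsigned)pid, len);
}

/* First video decoder 402: decodes the video ES and returns any embedded
 * caption data so the demux can repackage it (sketch only). */
static size_t video_decode_extract_cc(const uint8_t *video_es, size_t len,
                                      uint8_t *cc_out, size_t cc_max)
{
    (void)video_es; (void)len;
    size_t n = sizeof(fake_cc_payload) < cc_max ? sizeof(fake_cc_payload) : cc_max;
    memcpy(cc_out, fake_cc_payload, n);
    return n;
}

int main(void)
{
    uint8_t video_es[188] = {0};   /* pretend this came from the PID_VIDEO filter */
    uint8_t audio_es[188] = {0};   /* pretend this came from the PID_AUDIO filter */
    uint8_t cc[64];

    /* Step 1: the demux outputs video and audio by their PIDs as usual. */
    demux_output(PID_VIDEO, video_es, sizeof(video_es));
    demux_output(PID_AUDIO, audio_es, sizeof(audio_es));

    /* Step 2: the video decoder separates the embedded caption data and
     * feeds it back to the demux. */
    size_t cc_len = video_decode_extract_cc(video_es, sizeof(video_es), cc, sizeof(cc));

    /* Step 3: the demux packetizes the fed-back captions under their own
     * subtitle identifier and outputs them like any other subtitle stream. */
    demux_output(PID_CAPTION, cc, cc_len);
    return 0;
}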
Fig. 5 also shows the process by which the demultiplexer 401 demultiplexes a code stream based on a second digital television standard. Here, the second digital television standard is different from the first digital television standard and may be a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed, as in most digital television standards, for example the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
The demultiplexer 401 may, in the conventional manner, identify the second audio stream data, the second video stream data, and the second subtitle data based on the second digital television standard from the code stream based on the second digital television standard by their respective packet identifiers (PIDs), and separate them individually. That is, the second audio stream data, the second video stream data, and the second subtitle data based on the second digital television standard are each assigned a corresponding packet identifier (PID) so that the demultiplexer 401 knows how to separate them.
For example, the second audio stream data is assigned a packet identifier (PID) of 1111, the second video stream data a PID of 2222, and the second subtitle data a PID of 3333. Accordingly, the demultiplexer 401 identifies the code stream whose PID is 1111 as second audio stream data, the code stream whose PID is 2222 as second video stream data, and the code stream whose PID is 3333 as second subtitle data.
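A minimal sketch of this routing decision, using the example PIDs above (1111 for audio, 2222 for video, 3333 for subtitles), might look as follows in C; the handler functions are hypothetical stand-ins for the real audio, video, and subtitle consumers.

#include <stdint.h>
#include <stdio.h>

/* Example PIDs from the text; in practice they are announced in the PMT. */
#define PID_AUDIO2    1111
#define PID_VIDEO2    2222
#define PID_SUBTITLE2 3333

/* Hypothetical per-stream handlers. */
static void handle_audio(const uint8_t *p, int n)    { (void)p; printf("audio: %d bytes\n", n); }
static void handle_video(const uint8_t *p, int n)    { (void)p; printf("video: %d bytes\n", n); }
static void handle_subtitle(const uint8_t *p, int n) { (void)p; printf("subtitle: %d bytes\n", n); }

/* Route one demultiplexed payload to the right consumer by its PID. */
static void route_by_pid(int pid, const uint8_t *payload, int len)
{
    switch (pid) {
    case PID_AUDIO2:    handle_audio(payload, len);    break;
    case PID_VIDEO2:    handle_video(payload, len);    break;
    case PID_SUBTITLE2: handle_subtitle(payload, len); break;
    default:            /* unknown PID: ignore */      break;
    }
}

int main(void)
{
    uint8_t payload[184] = {0};
    route_by_pid(1111, payload, (int)sizeof(payload));
    route_by_pid(3333, payload, (int)sizeof(payload));
    return 0;
}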
Next, the specific process by which the first video decoder 402 separates out the first subtitle data and the demultiplexer encapsulates the first subtitle data will be described in detail.
Fig. 6 shows a schematic diagram of the MPEG-2 transport stream format used as the bit stream of a digital television (DTV), that is, the transport stream format of the first digital television standard.
The transport stream format includes a packet identifier (PID), which is a bit string 13 bits long. Different PIDs are assigned to the audio stream, the video stream, and the control stream. Accordingly, the demultiplexer 401 can separate, from the MPEG-2 transport stream (specifically, from the data_byte field of the transport stream format), the first video stream data and the first audio stream data, which have been converted into the packetized elementary stream (PES) data format, as well as the first control stream data, according to their respective PIDs.
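For reference, the C sketch below parses the fixed 4-byte header of a standard 188-byte MPEG-2 transport stream packet and extracts the 13-bit PID; the packet bytes in main() are made up purely for demonstration.

#include <stdint.h>
#include <stdio.h>

#define TS_PACKET_SIZE 188
#define TS_SYNC_BYTE   0x47

/* Extract the 13-bit PID from a standard MPEG-2 transport stream packet.
 * Returns -1 if the packet does not start with the sync byte. */
static int ts_packet_pid(const uint8_t *pkt)
{
    if (pkt[0] != TS_SYNC_BYTE)
        return -1;
    /* PID = low 5 bits of byte 1 (high part) followed by byte 2 (low part). */
    return ((pkt[1] & 0x1F) << 8) | pkt[2];
}

int main(void)
{
    /* Made-up packet header: sync byte, then a PID of 0x1001 (4097). */
    uint8_t pkt[TS_PACKET_SIZE] = { 0x47, 0x50, 0x01, 0x10 };

    int pid = ts_packet_pid(pkt);
    printf("PID = %d\n", pid);   /* prints: PID = 4097 */
    return 0;
}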
Fig. 7 shows a schematic diagram of a simplified version of the packetized elementary stream (PES) data format, with part of the format omitted.
The first video decoder 402 acquires elementary stream (ES) video data from the first video stream data in the PES data format shown in Fig. 7. Fig. 8 shows a schematic diagram of a simplified version of the elementary stream (ES) data format, with part of the format omitted.
The first video decoder 402 acquires, from the elementary stream (ES) video data, the user data in which the first subtitle data is located, and finally converts that user data into first subtitle data, for example closed caption (CC) data. Fig. 9 shows a schematic diagram of the data format of closed caption (CC) data. Note that the first subtitle data obtained by the first video decoder 402 is not in a standard transport stream format.
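The following C sketch illustrates, in simplified form, how such user data can be turned into closed caption constructs: it scans the elementary stream for the user_data_start_code (0x000001B2), checks for the ATSC identifier "GA94" and user_data_type_code 0x03, and then reads cc_count 3-byte cc_data constructs. This is only an illustrative sketch: it ignores emulation-prevention bytes, em_data handling, field reordering, and the other details a real CEA-708 extractor must observe, and the sample bytes in main() are invented.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One CEA-708 cc_data construct: cc_valid/cc_type flags plus two data bytes. */
struct cc_triplet { uint8_t cc_valid, cc_type, d1, d2; };

/* Very simplified extraction of cc_data() from MPEG-2 picture user data:
 * look for user_data_start_code 0x000001B2, the ATSC identifier "GA94" and
 * user_data_type_code 0x03, then read cc_count 3-byte constructs.
 * Real decoders must also handle em_data, marker bits, reordering, etc. */
static int extract_cc(const uint8_t *es, size_t len,
                      struct cc_triplet *out, int max_out)
{
    int found = 0;
    for (size_t i = 0; i + 11 < len; i++) {
        if (!(es[i] == 0x00 && es[i+1] == 0x00 && es[i+2] == 0x01 && es[i+3] == 0xB2))
            continue;
        if (memcmp(&es[i+4], "GA94", 4) != 0 || es[i+8] != 0x03)
            continue;
        int cc_count = es[i+9] & 0x1F;            /* low 5 bits */
        const uint8_t *p = &es[i+11];             /* skip flags byte and em_data */
        for (int k = 0; k < cc_count && found < max_out; k++, p += 3) {
            if ((size_t)(p + 2 - es) >= len) break;
            out[found].cc_valid = (p[0] >> 2) & 1;
            out[found].cc_type  = p[0] & 0x03;
            out[found].d1 = p[1];
            out[found].d2 = p[2];
            found++;
        }
    }
    return found;
}

int main(void)
{
    /* Made-up ES fragment: start code, "GA94", type 0x03, cc_count = 1,
     * em_data, then one construct carrying the two bytes 'H' 'I'. */
    const uint8_t es[] = { 0x00, 0x00, 0x01, 0xB2, 'G', 'A', '9', '4',
                           0x03, 0x41, 0x00, 0xFC, 'H', 'I', 0xFF };
    struct cc_triplet cc[31];
    int n = extract_cc(es, sizeof(es), cc, 31);
    printf("constructs: %d, first bytes: %c%c\n", n, cc[0].d1, cc[0].d2);
    return 0;
}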
In order for the demultiplexer 401 to still be able to separate the first subtitle data, for example the closed caption (CC) data, by packet identifier (PID), the demultiplexer 401 is further configured to packetize the first subtitle data with a first subtitle identifier into a first subtitle identifier (PID) data stream, wherein the first subtitle identifier is different from the second subtitle identifier of the second subtitle data based on the second digital television standard.
Here, in one embodiment, the demultiplexer 401 needs a way to distinguish the first subtitle data from the second subtitle data based on the second digital television standard, because the second subtitle data is already packetized and does not need to be packetized again by the demultiplexer 401. Since the packet identifiers used for code streams based on the second digital television standard are 13-bit binary numbers, that is, in the range 0-8191 (0 to 0x1FFF), the first video decoder 402 may attach to the first subtitle data a parameter outside this range, for example a value greater than 8191 such as 8192. When the demultiplexer 401 receives first subtitle data tagged with 8192, it does not treat it as second subtitle data, whose identifiers lie in the range 0-8191, but instead packetizes the first subtitle data with the first subtitle identifier (PID). Of course, this is not essential: when the demultiplexer 401 receives the first subtitle data produced by the first video decoder 402 and determines that the data stream carries no PID, it may directly packetize that data stream with a PID.
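The idea of the out-of-range marker can be sketched as follows: valid packet identifiers occupy 13 bits (0 to 0x1FFF), so a value such as 8192 can never belong to an already packetized stream, and the demultiplexer can use it to decide whether packetization is still required. The structure, function, and constant names below are illustrative assumptions only.

#include <stdint.h>
#include <stdio.h>

#define PID_MAX        0x1FFF   /* 13-bit PIDs: 0..8191 */
#define TAG_NEEDS_PACK 8192     /* out-of-range marker added by the video decoder */

/* A buffer handed from the first video decoder back to the demultiplexer. */
struct subtitle_buffer {
    int            tag;         /* a real PID (0..8191) or TAG_NEEDS_PACK */
    const uint8_t *data;
    size_t         len;
};

/* Demux-side decision: already-packetized second-standard subtitles pass
 * through, while tagged first-standard caption data gets packetized first. */
static void demux_accept_subtitles(const struct subtitle_buffer *buf, int caption_pid)
{
    if (buf->tag > PID_MAX) {
        printf("tag %d: not packetized yet, packetize with PID %d\n",
               buf->tag, caption_pid);
        /* packetize_as_pes(buf->data, buf->len, caption_pid); */
    } else {
        printf("tag %d: already a packetized stream, pass through\n", buf->tag);
    }
}

int main(void)
{
    uint8_t cc[3] = { 0xFC, 'H', 'I' };
    struct subtitle_buffer from_decoder = { TAG_NEEDS_PACK, cc, sizeof(cc) };
    struct subtitle_buffer dvb_subs     = { 3333, cc, sizeof(cc) };

    demux_accept_subtitles(&from_decoder, 8192);  /* first-standard captions */
    demux_accept_subtitles(&dvb_subs, 8192);      /* second-standard subtitles */
    return 0;
}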
The acquisition of the first subtitle data from the packetized elementary stream (PES) has been described above in connection with Figs. 7 to 9. The demultiplexer 401 may packetize the acquired first subtitle data into a first subtitle identifier (PID) data stream with the first subtitle identifier by reversing that process, going from data to elementary stream (ES) to packetized elementary stream (PES). The detailed packetization process is not described here.
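As a rough illustration of the packetization direction, the sketch below wraps the fed-back caption bytes in a minimal PES packet header before they would be mapped into transport stream packets under the first subtitle identifier. The choice of stream_id 0xBD (private_stream_1) is only an example, and timestamps, stuffing, and the transport stream layer itself are omitted; this is not the packetization procedure prescribed by the patent.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build a minimal PES packet around a caption payload.
 * Layout: 00 00 01 | stream_id | length(2) | '10' flags(2) | hdr_len | payload.
 * Returns the total PES size, or 0 if the output buffer is too small. */
static size_t pes_wrap(uint8_t stream_id, const uint8_t *payload, size_t plen,
                       uint8_t *out, size_t outmax)
{
    size_t total = 9 + plen;                 /* 6-byte prefix + 3-byte opt. header */
    if (outmax < total || plen > 0xFFFF - 3)
        return 0;
    out[0] = 0x00; out[1] = 0x00; out[2] = 0x01;
    out[3] = stream_id;                      /* e.g. 0xBD, private_stream_1 */
    uint16_t pes_len = (uint16_t)(3 + plen); /* bytes after the length field */
    out[4] = (uint8_t)(pes_len >> 8);
    out[5] = (uint8_t)(pes_len & 0xFF);
    out[6] = 0x80;                           /* '10' marker, no scrambling/flags */
    out[7] = 0x00;                           /* no PTS/DTS in this sketch */
    out[8] = 0x00;                           /* PES_header_data_length = 0 */
    memcpy(&out[9], payload, plen);
    return total;
}

int main(void)
{
    const uint8_t cc[3] = { 0xFC, 'H', 'I' };  /* fed-back caption bytes */
    uint8_t pes[64];
    size_t n = pes_wrap(0xBD, cc, sizeof(cc), pes, sizeof(pes));
    printf("PES packet of %zu bytes, length field = %d\n",
           n, (pes[4] << 8) | pes[5]);
    /* Next step (not shown): split into 188-byte TS packets carrying the
     * first subtitle identifier so the demux can output them by PID. */
    return 0;
}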
Here, the first caption identifier should be distinguishable from the packet identifiers that the demultiplexer 401 originally uses to separate the code stream based on the second digital television standard, so that the demultiplexer 401 can distinguish the first caption data of the first digital television standard from that of the second digital television standard. For example, if the packet identifiers used for the code stream based on the second digital television standard are certain 13-bit binary numbers in the range 0-8191 (0 to 0x1FFF), such as 2222 and 3333, then the first caption identifier may be set to a number other than these, for example a number greater than 4444, and the data can then be separated in the same way as ordinary caption data. Of course, the choice of the first subtitle identifier is not limited to this, as long as it enables the demultiplexer 401 to correctly distinguish the first subtitle data of the first digital television standard from that of the second digital television standard. Likewise, in the embodiment described above in which the demultiplexer 401 directly packetizes the data stream upon receiving the first subtitle data produced by the first video decoder 402, the PID may simply be set to one that differs from the conventional PIDs, for example 8192.
Of course, the PID and the packetization process described here are only examples, chosen to minimize changes to the interface parameters of the demultiplexer, and the application is not limited to them; in fact, the demultiplexer may packetize the subtitle data decoded by the first video decoder in other ways, which are not enumerated one by one here.
Then, after receiving the first subtitle identifier (PID) data stream originating from the first video decoder 402, the demultiplexer 401 separates the first subtitle data from that data stream according to the first subtitle identifier.
As before, the first video stream data is also assigned a first video stream identifier PID and the first audio stream data is assigned a first audio stream identifier PID, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID, respectively.
For example, the first video stream identifier (PID) is 4567, the first audio stream identifier (PID) is 6789, and the first subtitle identifier is 8192. In that case, separating the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively, means identifying the code stream whose PID is 4567 as the first video stream data, the code stream whose PID is 6789 as the first audio stream data, and the code stream whose PID is 8192 as the first subtitle data.
Here it can be seen that the hardware structure and function of the demultiplexer 401 remain the same as those used for most digital television standards, while also being compatible with a first digital television standard that differs from them, for example the Consumer Electronics Association CEA-708 standard used in ATSC.
In summary, the first video decoder 402 feeds the separated first subtitle data back to the demultiplexer 401, where it is repackaged into a PID data stream specific to the first digital television standard, so that the entire system acquires the data of the first digital television standard, for example ATSC closed caption (CC) data, in a standard, unified manner.
It can be seen that only one first video decoder 402 is added: without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The special data stream acquisition and processing steps for ATSC closed caption (CC) data, and the extra hardware module shown in Fig. 3, are no longer needed. Compatibility is therefore increased, the way in which the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
Fig. 10 shows a flowchart of a video data stream decoding method according to an embodiment of the present application.
The video data stream decoding method 1000 shown in Fig. 10 includes: step 1001, demultiplexing, by a demultiplexer, first video stream data based on a first digital television standard from a received bit stream based on the first digital television standard; step 1002, decoding, by a first video decoder, the first video stream data based on the first digital television standard demultiplexed by the demultiplexer, separating out first subtitle data, and transmitting the first subtitle data to the demultiplexer; and step 1003, outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
Here, the first digital television standard may be a standard in which subtitle data is embedded in the video stream, such as the Consumer Electronics Association CEA-708 standard used in the American ATSC system.
Here it can be seen that only one first video decoder 402 is added: without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The special data stream acquisition and processing steps for ATSC closed caption (CC) data, and the extra hardware module shown in Fig. 3, are no longer needed. Compatibility is therefore increased, the way in which the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
In one embodiment of the present application, step 1002 may include packetizing, by the demultiplexer, the first subtitle data with a first subtitle identifier into a first subtitle identifier data stream, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
In one embodiment of the present application, step 1003 may include: the first subtitle data is separated by a demultiplexer according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier and the first audio stream data is assigned a first audio stream identifier, wherein step 1003 may include: the first subtitle data, the first video stream data, and the first audio stream data are separated by the demultiplexer according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
In one embodiment of the present application, the first digital television standard may be a standard in which subtitle data is embedded in a video stream, and the second digital television standard may be a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed.
In one embodiment of the application, the first digital television standard may be the U.S. Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 caption standard, and the second digital television standard may be the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
In summary, the separated first subtitle data is fed back to the demultiplexer by the first video decoder and repackaged into a PID data stream specific to the first digital television standard, so that the entire system acquires the data of the first digital television standard, for example ATSC closed caption (CC) data, in a standard, unified manner.
It can be seen that only one first video decoder is added: without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The special data stream acquisition and processing steps for ATSC closed caption (CC) data, and the extra hardware module shown in Fig. 3, are no longer needed. Compatibility is therefore increased, the way in which the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
FIG. 11 illustrates a block diagram of an exemplary computer system suitable for use in implementing embodiments of the present application.
The computer system may include a processor (H1); a memory (H2) coupled to the processor (H1) and having stored therein computer executable instructions for performing the steps of the methods of the embodiments of the present application when executed by the processor.
The processor (H1) may include, but is not limited to, for example, one or more processors or microprocessors or the like.
The memory (H2) may include, for example but not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, and computer storage media (e.g., a hard disk, a floppy disk, a solid state disk, a removable disk, a CD-ROM, a DVD-ROM, a Blu-ray disc, etc.).
In addition, the computer system may include a data bus (H3), an input/output (I/O) bus (H4), a display (H5), and an input/output device (H6) (e.g., keyboard, mouse, speaker, etc.), etc.
The processor (H1) may communicate with external devices (H5, H6, etc.) via a wired or wireless network (not shown) through an I/O bus (H4).
The memory (H2) may also store at least one computer executable instruction for performing the functions and/or steps of the methods in the embodiments described in the present technology when executed by the processor (H1).
In one embodiment, the at least one computer-executable instruction may also be compiled or otherwise formed into a software product in which one or more computer-executable instructions, when executed by a processor, perform the functions and/or steps of the methods described in the embodiments of the technology.
Fig. 12 shows a schematic diagram of a non-transitory computer-readable storage medium according to an embodiment of the disclosure.
As shown in Fig. 12, the computer-readable storage medium 1220 has instructions stored thereon, such as computer-readable instructions 1210. When executed by a processor, the computer-readable instructions 1210 may perform the various methods described above. Computer-readable storage media include, but are not limited to, volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), hard disks, flash memory, and the like. For example, the computer-readable storage medium 1220 may be connected to a computing device such as a computer, and the various methods described above may then be performed by the computing device running the computer-readable instructions 1210 stored on the computer-readable storage medium 1220.
Of course, the specific embodiments described above are merely examples, and those skilled in the art may, following the concept of the present invention, combine steps and means from different embodiments described above to achieve the effects of the present invention. Such combined embodiments are also included in the present invention and are not described here one by one.
Note that advantages, effects, and the like mentioned in this disclosure are merely examples and are not to be construed as necessarily essential to the various embodiments of the invention. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the invention is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, the devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", "having", and the like are open-ended words meaning "including but not limited to" and may be used interchangeably with it. The terms "or" and "and" as used herein refer to, and may be used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and may be used interchangeably with, the phrase "such as but not limited to".
The step flow diagrams in this disclosure and the above method descriptions are merely illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. The order of steps in the above embodiments may be performed in any order, as will be appreciated by those skilled in the art. Words such as "thereafter," "then," "next," and the like are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example, using the articles "a," "an," or "the," is not to be construed as limiting the element to the singular.
In addition, the steps and means in the various embodiments herein are not limited to practice in a certain embodiment, and indeed, some of the steps and some of the means associated with the various embodiments herein may be combined according to the concepts of the present invention to contemplate new embodiments, which are also included within the scope of the present invention.
The individual operations of the above-described method may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software components and/or modules including, but not limited to, circuitry for hardware, an Application Specific Integrated Circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an ASIC, a field programmable gate array signal (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that may be used include Random Access Memory (RAM), read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, and so forth. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across several storage media.
The methods disclosed herein include one or more acts for implementing the described methods. The methods and/or acts may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of acts is specified, the order and/or use of specific acts may be modified without departing from the scope of the claims.
The functions described above may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Thus, the computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may comprise packaged material.
The software or instructions may also be transmitted over a transmission medium. For example, software may be transmitted from a website, server, or other remote source using a transmission medium such as a coaxial cable, fiber optic cable, twisted pair, digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, or microwave.
Furthermore, modules and/or other suitable means for performing the methods and techniques described herein may be downloaded and/or otherwise obtained by the user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein may be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a CD or floppy disk, etc.) so that the user terminal and/or base station can obtain the various methods when coupled to or providing storage means to the device. Further, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the disclosure and the appended claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hardwired or any combination of these. Features that implement the functions may also be physically located at various locations including being distributed such that portions of the functions are implemented at different physical locations. Also, as used herein, including in the claims, the use of "or" in the recitation of items beginning with "at least one" indicates a separate recitation, such that recitation of "at least one of A, B or C" means a or B or C, or AB or AC or BC, or ABC (i.e., a and B and C), for example. Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions, and alterations are possible to the techniques described herein without departing from the techniques of the teachings, as defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. The processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the invention to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A video data stream decoding system comprising:
a demultiplexer configured to demultiplex first video stream data based on a first digital television standard from a received bit stream based on the first digital television standard, the first digital television standard being a standard in which subtitle data is embedded in a video stream;
a first video decoder connected to the demultiplexer, the first video decoder configured to decode first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separate first subtitle data, and transmit the first subtitle data to the demultiplexer;
wherein the demultiplexer is configured to output the first subtitle data, the first video stream data based on the first digital television standard, the first audio stream data based on the first digital television standard,
wherein the demultiplexer is further configured to package the first subtitle data into a first subtitle identifier data stream with a first subtitle identifier, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
2. The system of claim 1, wherein the demultiplexer is configured to separate the first subtitle data according to the first subtitle identifier.
3. The system of claim 1, wherein the first video stream data is assigned a first video stream identifier and the first audio stream data is assigned a first audio stream identifier, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
4. The system of claim 1, wherein the second digital television standard is a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed.
5. A method of decoding a video data stream, comprising:
demultiplexing, by a demultiplexer, first video stream data based on a first digital television standard, which is a standard in which subtitle data is embedded in a video stream, from a received bit stream based on the first digital television standard;
decoding, by a first video decoder, first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separating first subtitle data, and transmitting the first subtitle data to the demultiplexer;
outputting, by a demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard;
the first subtitle data is packetized by a demultiplexer with a first subtitle identifier into a first subtitle identifier data stream, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
6. The method of claim 5, wherein the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, the first audio stream data based on the first digital television standard comprises:
The first subtitle data is separated by a demultiplexer according to the first subtitle identifier.
7. The method of claim 5, wherein the first video stream data is assigned a first video stream identifier and the first audio stream data is assigned a first audio stream identifier, wherein the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, the first audio stream data based on the first digital television standard further comprises:
the first subtitle data, the first video stream data, and the first audio stream data are separated by a demultiplexer according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
8. The method of claim 5, wherein the second digital television standard is a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed.
9. An electronic device, comprising:
a memory for storing instructions;
a processor for reading instructions in said memory and performing the method of any of claims 5-8.
10. A non-transitory storage medium having instructions stored thereon,
wherein the instructions, when read by a processor, cause the processor to perform the method of any of claims 5-8.
CN202111642351.9A 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium Active CN114302215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111642351.9A CN114302215B (en) 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111642351.9A CN114302215B (en) 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium

Publications (2)

Publication Number Publication Date
CN114302215A CN114302215A (en) 2022-04-08
CN114302215B true CN114302215B (en) 2023-09-29

Family

ID=80971264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111642351.9A Active CN114302215B (en) 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium

Country Status (1)

Country Link
CN (1) CN114302215B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4311570B2 (en) * 2005-07-01 2009-08-12 株式会社ソニー・コンピュータエンタテインメント Playback apparatus, video decoding apparatus, and synchronous playback method
CA2918738A1 (en) * 2013-09-03 2015-03-12 Lg Electronics Inc. Apparatus for transmitting broadcast signals, apparatus for receiving broadcast signals, method for transmitting broadcast signals and method for receiving broadcast signals

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894578A (en) * 2002-10-11 2010-11-24 汤姆森许可贸易公司 The method and apparatus of synchronous data flow
CN101860699A (en) * 2003-09-17 2010-10-13 Lg电子株式会社 Digital broadcasting transmitter and method for processing caption thereof
TW201038065A (en) * 2009-04-14 2010-10-16 Mediatek Singapore Pte Ltd Method for processing a subtitle data stream of a video program and associated video display system
CN104137555A (en) * 2012-03-21 2014-11-05 索尼公司 Non-closed caption data transport in standard caption service
KR20120107897A (en) * 2012-08-16 2012-10-04 엘지전자 주식회사 Method of transmitting a digital broadcast signal
CN203327190U (en) * 2013-02-28 2013-12-04 青岛海尔电子有限公司 Television program caption processing system and broadcast system
CN103281495A (en) * 2013-05-14 2013-09-04 无锡北斗星通信息科技有限公司 Digital television receiver compatible with DVB (Digital Video Broadcasting) and ATSC (Advanced Television Systems Committee) standards
CN103248927A (en) * 2013-05-15 2013-08-14 无锡北斗星通信息科技有限公司 MIMO (multiple-input multiple-output)-type DVB-T(Digital Video Broadcasting-Terrestrial) set top box with caption processing function
CN105791957A (en) * 2013-05-15 2016-07-20 孔涛 Ultra-high-definition digital television receiver using HEVC (high efficiency video coding)
WO2015134878A1 (en) * 2014-03-07 2015-09-11 Thomson Licensing Simultaneous subtitle closed caption system
CN111276170A (en) * 2014-08-07 2020-06-12 松下电器(美国)知识产权公司 Decoding system and decoding method
CN107211170A (en) * 2015-02-20 2017-09-26 索尼公司 Transmitting device, transmission method, reception device and method of reseptance
CN104780416A (en) * 2015-03-18 2015-07-15 福建新大陆通信科技股份有限公司 A set top box subtitle display system
CN104917983A (en) * 2015-05-29 2015-09-16 北京时代奥视科技股份有限公司 Device, system and method for processing hiding subtitles in digital video signals
WO2017164551A1 (en) * 2016-03-22 2017-09-28 엘지전자 주식회사 Broadcast signal transmission and reception method and device
CN107864393A (en) * 2017-11-17 2018-03-30 青岛海信电器股份有限公司 The method and device that video is shown with captioning synchronization
CN109963092A (en) * 2017-12-26 2019-07-02 深圳市优必选科技有限公司 A kind of processing method of subtitle, device and terminal
CN109218758A (en) * 2018-11-19 2019-01-15 珠海迈科智能科技股份有限公司 A kind of trans-coding system that supporting CC caption function and method
CN112055262A (en) * 2020-08-11 2020-12-08 视若飞信息科技(上海)有限公司 Method and system for displaying network streaming media subtitles
CN112055253A (en) * 2020-08-14 2020-12-08 央视国际视频通讯有限公司 Method and device for adding and multiplexing independent subtitle stream
CN112672099A (en) * 2020-12-31 2021-04-16 深圳市潮流网络技术有限公司 Subtitle data generation and presentation method, device, computing equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Solving dialogue subtitle production for international editions of television programs using separated-subtitle technology; Luo Wei (罗威); 现代电视技术 (Issue 05); full text *
Design of a DVB subtitle decoder based on the Guoxin 6102 chip; Ji Aiguo (吉爱国); Yi Zhusong (衣祝松); 福建电脑 (Issue 01); full text *

Also Published As

Publication number Publication date
CN114302215A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
KR100552678B1 (en) Apparauts and method for transmitting and receiving with reducing the setup time of data packet
KR101408485B1 (en) Method and apparatus for encoding metadata into a digital program stream
KR101204513B1 (en) Digital multimedia reproduction apparatus and method for providing digital multimedia broadcasting thereof
US20110149153A1 (en) Apparatus and method for dtv closed-captioning processing in broadcasting and communication system
JP2006217636A (en) Method and apparatus of providing and receiving video service in digital audio broadcasting
US20130176387A1 (en) Digital receiver and method for processing 3d contents in digital receiver
KR101486354B1 (en) Broadcast receiver and method for processing broadcast data
US20130209063A1 (en) Digital receiver and content processing method in digital receiver
CN109417648B (en) Receiving apparatus and receiving method
CN114302215B (en) Video data stream decoding system, method, electronic device and medium
US20130235865A1 (en) Apparatus and method for transmitting data in broadcasting system
KR20040084508A (en) Apparatus and Its Method of Multiplexing Multimedia Data to DAB Data
KR20160106069A (en) Method and apparatus for reproducing multimedia data
CN106060646A (en) Ultrahigh-definition digital television receiver applying subtitle processing module
US9319736B2 (en) Apparatus and method for editing TS program information and TS recording device using the same
KR20080054181A (en) An apparatus and a method for receiving broadcast
KR100525404B1 (en) Method for watching restriction of Digital broadcast
CN105791957A (en) Ultra-high-definition digital television receiver using HEVC (high efficiency video coding)
KR100725928B1 (en) DMB Receiving Terminal Apparatus and Method for high-speed decoding of broadcasting contents
US20100296794A1 (en) Information processing apparatus and information processing method
KR20080073435A (en) Digital broadcasting transmitter, digital broadcasting receiver and system and method for serving digital broadcasting
KR100513795B1 (en) Transmitting/receiving apparatus and its method for providing synchronized event service using system time clock in digital data broadcasting system
KR101314619B1 (en) A playing method and a playing apparatus for multimedia stream
KR980013417A (en) Method and apparatus for transmitting audio data
JP4431633B2 (en) Receiving apparatus and receiving method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 101, 1st Floor, Building 3, Yard 18, Kechuang 14th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 101102

Applicant after: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant after: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 101102 No. 2179, floor 2, building D, building 33, No. 99, Kechuang 14th Street, Beijing Economic and Technological Development Zone (centralized office area)

Applicant before: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant before: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
CB02 Change of applicant information

Address after: 101102 Room 101, 1/F, Building 3, No. 18 Courtyard, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant after: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 101, 1st Floor, Building 3, Yard 18, Kechuang 14th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 101102

Applicant before: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant before: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant