CN114302215A - Video data stream decoding system, method, electronic device, and medium - Google Patents

Video data stream decoding system, method, electronic device, and medium

Info

Publication number
CN114302215A
CN114302215A
Authority
CN
China
Prior art keywords
data
subtitle
digital television
demultiplexer
identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111642351.9A
Other languages
Chinese (zh)
Other versions
CN114302215B (en)
Inventor
张侠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Guangzhou Quanshengwei Information Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Guangzhou Quanshengwei Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd and Guangzhou Quanshengwei Information Technology Co Ltd
Priority to CN202111642351.9A
Publication of CN114302215A
Application granted
Publication of CN114302215B
Legal status: Active
Anticipated expiration

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Provided are a video data stream decoding system, method, electronic device, and medium. The system includes: a demultiplexer configured to demultiplex first video stream data based on a first digital television standard from the received bit stream based on the first digital television standard; a first video decoder connected to the demultiplexer, the first video decoder configured to decode first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separate first subtitle data, and transmit the first subtitle data to the demultiplexer; wherein the demultiplexer is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard. This technical solution improves compatibility, unifies the way the software application layer acquires resources, and makes the hardware architecture simple and clear.

Description

Video data stream decoding system, method, electronic device, and medium
Technical Field
The present application relates to the field of video coding and decoding, and more particularly, to video data stream decoding systems, methods, electronic devices, and non-transitory storage media.
Background
Digital television (DTV) is a television system in which the signal is processed digitally at every stage, from program acquisition and production through transmission to the user side; every link from the studio to transmission and reception uses digital signals, carried as sequences composed of the digits 0 and 1. Digital television is the third generation of television, following black-and-white analog television and color analog television, and is defined in contrast to analog television. Compared with analog television, digital television offers higher image quality, richer functionality, better sound, and more content, and generally supports interactivity and communication.
Current digital television standards generally fall into three different families, formed respectively in the United States, Europe, and Japan. The United States standard is ATSC (Advanced Television Systems Committee); the European standard is DVB (Digital Video Broadcasting); the Japanese standard is ISDB (Integrated Services Digital Broadcasting).
In the development of digital television specifications, DVB and ISDB each define their own subtitle specification and format. Developers usually obtain the subtitle Packetized Elementary Stream (PES) by allocating a Packet Identifier (PID) filter in the demultiplexer (Demux), then decode and render the PES data so that it is finally displayed for the user to browse.
Some digital television standards are exceptions: the subtitle data is not packetized separately but embedded in the video data stream. When the receiving end plays such a video stream, a hardware module for decoding the video stream of that digital television standard conventionally has to be designed separately in order to decode and play the video stream smoothly.
Accordingly, there is a need for a video stream decoding and playback solution that is compatible with various digital television standards conveniently and without adding excessive hardware modules.
Disclosure of Invention
According to an aspect of the present application, there is provided a video data stream decoding system, including: a demultiplexer configured to demultiplex first video stream data based on a first digital television standard from the received bit stream based on the first digital television standard; a first video decoder connected to the demultiplexer, the first video decoder configured to decode first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separate first subtitle data, and transmit the first subtitle data to the demultiplexer; wherein the demultiplexer is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
In one embodiment of the present application, the demultiplexer is further configured to packetize the first subtitle data into a first subtitle identifier data stream with a first subtitle identifier, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
In one embodiment of the present application, the demultiplexer is configured to separate out the first subtitle data according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier, and the first audio stream data is assigned a first audio stream identifier, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
In one embodiment of the present application, the first digital television standard is a standard in which caption data is embedded in a video stream, and the second digital television standard is a standard in which caption data, video stream data, and audio stream data are individually time-division multiplexed.
In one embodiment of the present application, the first digital television standard is the United States Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 subtitle standard, and the second digital television standard is the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
According to an aspect of the present application, there is provided a video data stream decoding method, including: demultiplexing first video stream data based on the first digital television standard from the received bit stream based on the first digital television standard by a demultiplexer; decoding, by a first video decoder, first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separating first subtitle data, and transmitting the first subtitle data to the demultiplexer; and outputting the first subtitle data, the first video stream data based on the first digital television standard and the first audio stream data based on the first digital television standard by a demultiplexer.
In one embodiment of the present application, the method comprises: packing, by a demultiplexer, the first subtitle data into a first subtitle identifier data stream with a first subtitle identifier, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
In an embodiment of the present application, the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard includes: separating, by a demultiplexer, the first subtitle data according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier, and the first audio stream data is assigned a first audio stream identifier, wherein the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard further includes: separating, by a demultiplexer, the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
In one embodiment of the present application, the first digital television standard is a standard in which caption data is embedded in a video stream, and the second digital television standard is a standard in which caption data, video stream data, and audio stream data are individually time-division multiplexed.
In one embodiment of the present application, the first digital television standard is the United States Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 subtitle standard, and the second digital television standard is the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
According to an aspect of the present application, there is provided an electronic device including: a memory to store instructions; a processor to read the instructions in the memory and to perform the methods of the various embodiments of the present application.
According to an aspect of the present application, there is provided a non-transitory storage medium having instructions stored thereon, wherein the instructions, when read by a processor, cause the processor to perform the methods of the various embodiments of the present application.
According to the technical solutions of the present application, compatibility is improved, the way the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure and the technical solutions in the prior art, the drawings used in the description of the embodiments and the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a block diagram of a receiver of the DVB digital television standard.
Fig. 2 shows the transmission structure of digital video, Program Map Table (PMT), Event Information Table (EIT), audio and other data, and synchronization information in a digital television code stream under the CEA-708 standard.
Fig. 3 shows a block diagram of a receiver of the CEA-708 digital television standard.
Fig. 4 shows a block diagram of a video data stream decoding system according to an embodiment of the present application.
Fig. 5 shows an exemplary diagram of an application of a decoding system according to the embodiment of fig. 4.
Fig. 6 shows a schematic diagram of the MPEG-2 transport stream format used as the bit stream of digital television (DTV), i.e., the transport stream format of the first digital television standard.
Fig. 7 shows a simplified view of the packetized elementary stream (PES) data format with some fields omitted.
Fig. 8 shows a simplified view of the elementary stream (ES) data format with some fields omitted.
Fig. 9 shows a schematic diagram of a data format of closed caption CC data.
Fig. 10 shows a flow chart of a method of decoding a video data stream according to an embodiment of the present application.
FIG. 11 illustrates a block diagram of an exemplary computer system suitable for use in implementing embodiments of the present application.
Fig. 12 shows a schematic diagram of a non-transitory computer-readable storage medium according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that they are not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and that any functional block or functional arrangement may be implemented as a physical entity or a logical entity, or a combination of both.
In order that those skilled in the art will better understand the present invention, the following detailed description of the invention is provided in conjunction with the accompanying drawings and the detailed description of the invention.
Note that the example to be described next is only a specific example and is not intended as a limitation on the embodiments of the present invention; the specific shapes, hardware, connections, steps, numerical values, conditions, data, orders, and the like shown and described are merely illustrative. Those skilled in the art can, upon reading this specification, use the concepts of the present invention to construct more embodiments than those specifically described herein.
DVB and ISDB each define the specification and format of their subtitles individually. A developer usually obtains the subtitle Packetized Elementary Stream (PES) by allocating a Packet Identifier (PID) filter in the demultiplexer (Demux), then decodes and renders the PES data so that it is finally displayed for the user to browse. That is, at the transmitting end, the subtitle data of the video is separated from the video data and the audio data and time-division multiplexed with them, so that at the receiving end the code stream is demultiplexed and the subtitle data, video data, and audio data are separated out again. Receiver hardware of this kind is compatible with most digital television standards.
Fig. 1 shows a block diagram of a receiver 100 of the DVB digital television standard. The receiver 100 includes: a tuner 101 for receiving a radio frequency signal and performing frequency conversion, filtering, and automatic gain control; a demodulator 102 for demodulating the data output by the tuner 101; and a demultiplexer 103 for demultiplexing the data output by the demodulator 102 and decomposing it into separate DVB audio data, DVB video data, and DVB subtitle data according to their respective PIDs.
It can be seen that for such digital television streams, the DVB-subtitle data originates directly from the demultiplexer 103 via its own packet identifier PID.
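To make this conventional PID-driven flow concrete, the following C sketch illustrates a PID filter table of the kind a DVB receiver's software might maintain. The structure and function names are hypothetical and do not correspond to any particular vendor's demultiplexer API; the point is only that DVB subtitle data reaches its handler purely by PID, with no extra decoding stage in between.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative PID-filter table; all names here are hypothetical. */
    #define MAX_FILTERS 16

    typedef void (*payload_cb)(const uint8_t *payload, size_t len, void *ctx);

    struct pid_filter {
        uint16_t   pid;   /* 13-bit packet identifier announced in the PMT */
        payload_cb cb;
        void      *ctx;
    };

    struct demux {
        struct pid_filter filters[MAX_FILTERS];
        int               count;
    };

    int demux_add_filter(struct demux *d, uint16_t pid, payload_cb cb, void *ctx)
    {
        if (d->count >= MAX_FILTERS)
            return -1;
        d->filters[d->count++] = (struct pid_filter){ pid, cb, ctx };
        return 0;
    }

    /* Called once per de-packetized payload; DVB subtitle data reaches its
     * registered handler here without any further decoding in between. */
    void demux_dispatch(struct demux *d, uint16_t pid,
                        const uint8_t *payload, size_t len)
    {
        for (int i = 0; i < d->count; i++)
            if (d->filters[i].pid == pid)
                d->filters[i].cb(payload, len, d->filters[i].ctx);
    }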
However, there are exceptions among digital television standards. For example, under the Consumer Electronics Association / Electronic Industries Alliance (CEA/EIA)-708 standard adopted within the ATSC standard, the subtitle data is not packetized separately but embedded in the video data stream; this is commonly referred to as ATSC Closed Captioning (CC). Closed captions are sometimes called "captions for the hearing impaired" because they describe all sounds and dialogue in the video with words or symbols, in particular sounds such as a knock on the door or the murmur of a stream, which the ordinary subtitles of DVB and ISDB, describing only dialogue in words, do not cover.
Closed caption CC data may be transmitted over 9 channels: the odd field carries 4 channels, CC1, CC2, TEXT1, and TEXT2; the even field carries 5 channels, CC3, CC4, TEXT3, TEXT4, and XDS (Extended Data Services). CC1, CC2, CC3, and CC4 can carry text in different languages; the content is mainly the dialogue of the people on screen, and the corresponding text can be displayed near the speaker's mouth. TEXT1, TEXT2, TEXT3, and TEXT4 are mainly used to transmit information such as weather forecasts and news. XDS is generally used to transmit time information, television network information, the name of the current program, and so on, and carries data mainly for V-CHIP (program rating) use. Closed captioning mainly follows two standards: EIA-608 (CEA-608) and EIA-708 (CEA-708).
The data stream of an ATSC closed caption resource is not carried in a data stream with its own packet identifier PID, but inside Moving Picture Experts Group-2 (MPEG-2) picture user data. As shown in Fig. 2, Fig. 2 shows the transmission structure of digital video, Program Map Table (PMT), Event Information Table (EIT), audio and other data, and synchronization information in a digital television code stream under the CEA-708 standard. The bit stream of digital television comprises audio data, video data, and control data, where the control data is responsible for controlling the playback of the audio and video data. As can be seen in Fig. 2, digital television closed caption (DTVCC) service data, including caption text, window instructions, and so on, is encapsulated in the picture user data within the video data.
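As an illustration of how such embedded caption bytes are typically laid out, the C sketch below walks a cc_data() block of the kind carried after the "GA94" identifier and user_data_type_code 0x03 in ATSC picture user data. The bit layout follows the publicly documented ATSC A/53 syntax; this is a simplified sketch offered for orientation, not a conformance-grade parser.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Walk a cc_data() block as commonly carried in ATSC picture user data.
     * p points just past the "GA94" identifier and the 0x03 type code. */
    void parse_cc_data(const uint8_t *p, size_t len)
    {
        if (len < 2)
            return;
        int process_cc_data = (p[0] >> 6) & 1;   /* process_cc_data_flag */
        int cc_count        =  p[0] & 0x1F;      /* number of 3-byte constructs */
        if (!process_cc_data)
            return;

        const uint8_t *cc = p + 2;               /* skip flags byte and em_data byte */
        for (int i = 0; i < cc_count && (size_t)(cc + 3 - p) <= len; i++, cc += 3) {
            int cc_valid = (cc[0] >> 2) & 1;
            int cc_type  =  cc[0] & 0x3;         /* 2, 3 = DTVCC (CEA-708) packet data */
            if (cc_valid)
                printf("cc_type=%d data=%02X %02X\n", cc_type, cc[1], cc[2]);
        }
    }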
For such a closed-caption video stream, when the video stream is played at the receiving-end decoder, a dedicated hardware module for decoding the video stream of this digital television standard conventionally has to be designed separately for smooth decoding and playback, as shown in Fig. 3. Fig. 3 shows a block diagram of a receiver 300 of the CEA-708 digital television standard. The receiver 300 includes: a tuner 101 for receiving a radio frequency signal and performing frequency conversion, filtering, and automatic gain control; a demodulator 102 for demodulating the data output by the tuner 101; a demultiplexer 303 for demultiplexing the data output by the demodulator 102 and decomposing it into separate ATSC audio data and ATSC video data; and a video decoder 304 for further decoding the ATSC video data and separating out the ATSC closed caption CC data.
It can be seen that extracting closed caption data in this way requires acquiring a special data stream through a special channel (e.g., the additional video decoder 304). This makes the way the software application layer acquires resources non-uniform and the hardware architecture more complex.
The present application unifies the way the software application layer acquires resources and makes the hardware architecture simple and clear.
Fig. 4 shows a block diagram of a video data stream decoding system according to an embodiment of the present application.
As shown in fig. 4, the video data stream decoding system 400 includes: a demultiplexer 401 configured to demultiplex first video stream data based on the first digital television standard from the received bit stream based on the first digital television standard; a first video decoder 402 connected to the demultiplexer 401, the first video decoder 402 configured to decode the first video stream data based on the first digital television standard demultiplexed from the demultiplexer 401 and separate first subtitle data, and transmit the first subtitle data to the demultiplexer 401; wherein the demultiplexer 401 is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
Fig. 5 shows an exemplary diagram of an application of a decoding system according to the embodiment of fig. 4.
As shown in fig. 5, the tuner 101 is used for receiving a radio frequency signal and is responsible for frequency conversion, filtering, and automatic gain control. The demodulator 102 is configured to demodulate data output from the tuner 101 to obtain a bitstream according to the first digital television standard. The demultiplexer 401 is configured to demultiplex first video stream data based on the first digital television standard and first audio stream data based on the first digital television standard from the received bit stream based on the first digital television standard. The first video decoder 402 is configured to decode the first video stream data based on the first digital television standard demultiplexed from the demultiplexer 401 and separate out the first subtitle data, and transmit the first subtitle data to the demultiplexer 401. The demultiplexer 401 outputs first subtitle data, first video stream data based on a first digital television standard, and first audio stream data based on the first digital television standard.
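The overall data flow of Fig. 5 can be summarized in the following C skeleton. Every type and function body here is a stub introduced only to show who hands data to whom; none of it is a real tuner, demodulator, demultiplexer, or decoder implementation.

    #include <stdint.h>
    #include <stddef.h>

    /* Skeleton of the data flow in Fig. 5; all bodies are placeholders. */
    typedef struct { const uint8_t *data; size_t len; } buf_t;

    static buf_t demux_video(buf_t ts)                { return ts; } /* split by video PID      */
    static buf_t demux_audio(buf_t ts)                { return ts; } /* split by audio PID      */
    static buf_t decode_video_and_extract_cc(buf_t v) { return v;  } /* picture user data       */
    static buf_t demux_repack_cc(buf_t cc)            { return cc; } /* new subtitle identifier */

    void handle_first_standard(buf_t transport_stream)
    {
        buf_t video = demux_video(transport_stream);        /* demultiplexer 401        */
        buf_t audio = demux_audio(transport_stream);
        buf_t cc    = decode_video_and_extract_cc(video);   /* first video decoder 402  */
        buf_t subs  = demux_repack_cc(cc);                  /* sent back to demux 401   */
        /* the application layer now sees subs / video / audio uniformly */
        (void)audio; (void)subs;
    }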
Here, the first digital television standard may be a standard in which subtitle data is embedded in the video stream, such as the CEA-708 standard used in the United States ATSC system.
It can be seen that only the first video decoder 402 is added; without changing the hardware structure or function of the conventional tuner, demodulator, and demultiplexer, a code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The dedicated acquisition and processing steps for ATSC closed caption CC data, and the hardware module shown in Fig. 3, are not required. Compatibility is therefore increased, the way the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
Of course, Fig. 5 also shows the process by which the demultiplexer 401 demultiplexes a code stream based on the second digital television standard. Here, the second digital television standard is different from the first digital television standard and may be a standard in which subtitle data, video stream data, and audio stream data are individually time-division multiplexed, as in most digital television standards, such as the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
The demultiplexer 401 may, in the conventional manner, identify the second audio stream data, second video stream data, and second subtitle data based on the second digital television standard in the code stream through their respective packet identifiers PID and separate them. That is, the second audio stream data, second video stream data, and second subtitle data based on the second digital television standard are each assigned a corresponding packet identifier PID so that the demultiplexer 401 knows how to separate them.
For example, the second audio stream data is assigned PID 1111, the second video stream data PID 2222, and the second subtitle data PID 3333. The demultiplexer 401 then identifies the stream with PID 1111 as the second audio stream data, the stream with PID 2222 as the second video stream data, and the stream with PID 3333 as the second subtitle data.
Next, the specific process by which the first video decoder 402 separates out the first subtitle data and the demultiplexer packs it will be described in detail.
Fig. 6 shows a schematic diagram of the MPEG-2 transport stream format used as the bit stream of digital television (DTV), i.e., the transport stream format of the first digital television standard.
The transport stream format includes a packet identifier PID, a bit string 13 bits in length. Different PIDs are assigned to the audio stream, the video stream, and the control stream. Accordingly, the demultiplexer 401 can separate the first video stream data and first audio stream data, which are carried in the packetized elementary stream PES data format, and the first control stream data out of the MPEG-2 transport stream (specifically, out of the data_byte field of the transport stream format) according to their respective PIDs.
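For illustration, extracting the 13-bit PID follows directly from the standard MPEG-2 transport packet header layout; the small C helper below is a minimal sketch of that step.

    #include <stdint.h>

    #define TS_PACKET_SIZE 188
    #define TS_SYNC_BYTE   0x47

    /* Extract the 13-bit packet identifier from an MPEG-2 transport packet.
     * Returns a negative value if the packet does not start with the sync byte. */
    int ts_packet_pid(const uint8_t pkt[TS_PACKET_SIZE])
    {
        if (pkt[0] != TS_SYNC_BYTE)
            return -1;
        /* PID = low 5 bits of byte 1 (after the error/start/priority flags)
         *       followed by all 8 bits of byte 2                            */
        return ((pkt[1] & 0x1F) << 8) | pkt[2];
    }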
Fig. 7 shows a simplified view of the packetized elementary stream PES data format with some fields omitted.
The first video decoder 402 obtains elementary stream ES video data from the first video stream data in the PES data format shown in Fig. 7. Fig. 8 shows a simplified view of the elementary stream ES data format with some fields omitted.
The first video decoder 402 obtains the user data containing the first subtitle data from the elementary stream ES video data and finally converts the user data into the first subtitle data, for example closed caption CC data. Fig. 9 shows a schematic diagram of the data format of closed caption CC data. Note that the first subtitle data obtained by the first video decoder 402 is not in a standard transport stream format.
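As a rough illustration of this step, the C sketch below scans an MPEG-2 video elementary stream for the user_data_start_code followed by the "GA94" identifier and type code 0x03 that mark embedded cc_data. A real video decoder locates the same bytes while parsing the picture layer rather than by scanning, so this should be read as a simplification under that assumption.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Find user_data_start_code (00 00 01 B2) followed by "GA94" and
     * user_data_type_code 0x03, i.e., the start of an embedded cc_data() block. */
    const uint8_t *find_cc_user_data(const uint8_t *es, size_t len, size_t *out_len)
    {
        static const uint8_t start[4] = { 0x00, 0x00, 0x01, 0xB2 };
        static const uint8_t ga94[5]  = { 'G', 'A', '9', '4', 0x03 };

        for (size_t i = 0; i + 9 < len; i++) {
            if (memcmp(es + i, start, 4) == 0 &&
                memcmp(es + i + 4, ga94, 5) == 0) {
                *out_len = len - (i + 9);  /* remaining bytes; a full parser would
                                              stop at the next start code */
                return es + i + 9;         /* points at the cc_data() block */
            }
        }
        return NULL;
    }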
In order for the demultiplexer 401 to still be able to separate the first subtitle data, e.g., the closed caption CC data, by a packet identifier PID, the demultiplexer 401 is further configured to pack the first subtitle data into a first subtitle identifier PID data stream under a first subtitle identifier, where the first subtitle identifier is different from the second subtitle identifier of the second subtitle data based on the second digital television standard.
Here, in one embodiment, the demultiplexer 401 can distinguish the first subtitle data from the second subtitle data based on the second digital television standard, so that it knows the first subtitle data needs to be packed, because the second subtitle data is already packetized and does not need to be packed again by the demultiplexer 401. Since every packet identifier in a bit stream based on the second digital television standard is a 13-bit binary number, i.e., lies in the range 0 to 8191 (0 to 0x1FFF), the first video decoder 402 can attach a parameter greater than 8191, for example 8192, to the first subtitle data. When the demultiplexer 401 receives first subtitle data carrying the value 8192, it does not treat it as second subtitle data, whose identifiers lie in the range 0 to 8191, but instead packs the first subtitle data under the first subtitle identifier PID. Of course, this is not essential: the demultiplexer 401 may also determine, upon receiving the first subtitle data obtained by the first video decoder 402, that the data stream carries no PID, and directly pack it with a PID.
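A minimal sketch of this check, assuming the out-of-range value 8192 used in the example above (the macro names are illustrative only):

    #include <stdbool.h>

    #define MAX_TS_PID        0x1FFF  /* 13-bit PIDs: 0..8191 are legal transport PIDs */
    #define FIRST_SUBTITLE_ID 8192    /* example value attached by the video decoder   */

    /* Decide whether an incoming buffer is an ordinary, already packetized
     * second-standard stream, or raw caption bytes returned by the video
     * decoder that the demultiplexer still has to packetize itself. */
    bool needs_packetizing(unsigned stream_id)
    {
        return stream_id > MAX_TS_PID;   /* e.g. 8192: not a legal TS PID */
    }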
Reversing the process described above in connection with Figs. 7-9 for obtaining the first subtitle data from the packetized elementary stream PES, the demultiplexer 401 can pack the obtained first subtitle data back from raw data into an elementary stream ES and then into a packetized elementary stream PES, forming the first subtitle identifier PID data stream under the first subtitle identifier. The detailed packing process is not described here.
Here, the first caption identifier may be chosen to be distinct from every packet identifier originally used by the demultiplexer 401 to separate code streams based on the second digital television standard, so that the demultiplexer 401 can distinguish the first caption data of the first digital television standard from data of the second digital television standard. For example, if the packet identifiers used for the code stream based on the second digital television standard are certain 13-bit binary numbers in the range 0 to 8191 (0 to 0x1FFF), e.g., 2222 and 3333, then the first caption identifier may be set to a number other than these, e.g., a value greater than 4444, and the first caption data can then be separated like normal caption data. Of course, the choice of the first subtitle identifier is not limited to this, as long as it enables the demultiplexer 401 to correctly distinguish the first subtitle data of the first digital television standard from that of the second digital television standard. Likewise, if, as in the embodiment above, the demultiplexer 401 directly packs the data stream with a PID upon receiving the first subtitle data obtained by the first video decoder 402, that PID can simply be set to a value different from the conventional PIDs, for example 8192.
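The repacking step might look roughly like the following C sketch. The stream_id value 0xBD (private_stream_1), the omission of the optional PES header fields, and the tagging structure are assumptions made for illustration; the specification does not prescribe a particular PES layout, only that the caption data be re-packetized under the first subtitle identifier.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Wrap caption bytes in a minimal PES-style packet and tag it with the
     * software-level subtitle identifier (e.g. 8192, which does not fit in a
     * 13-bit transport PID and so is carried as a separate tag). */
    struct tagged_pes {
        unsigned id;            /* e.g. 8192: the first subtitle identifier */
        uint8_t  data[65536];
        size_t   len;
    };

    size_t pack_caption_pes(struct tagged_pes *out, unsigned subtitle_id,
                            const uint8_t *cc, size_t cc_len)
    {
        if (cc_len > sizeof(out->data) - 6)
            return 0;
        out->id = subtitle_id;
        out->data[0] = 0x00;                              /* packet_start_code_prefix */
        out->data[1] = 0x00;                              /* 00 00 01                 */
        out->data[2] = 0x01;
        out->data[3] = 0xBD;                              /* private_stream_1         */
        out->data[4] = (uint8_t)((cc_len >> 8) & 0xFF);   /* PES_packet_length        */
        out->data[5] = (uint8_t)(cc_len & 0xFF);
        memcpy(out->data + 6, cc, cc_len);
        out->len = 6 + cc_len;
        return out->len;
    }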
Of course, the PID and packing processes above are only examples chosen to minimize changes to the interface parameters of the demultiplexer; they are not limiting. In fact, the demultiplexer may also pack the subtitle data decoded by the first video decoder by other means, which need not be elaborated here.
Then, the demultiplexer 401 is configured to separate the first subtitle data from the first subtitle identifier PID data stream according to the first subtitle identifier after receiving the first subtitle identifier PID data stream transmitted from the first video decoder 402.
As described above, the first video stream data is also assigned a first video stream identifier PID and the first audio stream data a first audio stream identifier PID, and the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID, respectively.
For example, the first video stream identifier PID is 4567, the first audio stream identifier PID is 6789, and the first subtitle identifier is 8192. Separating the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID then means: the stream with PID 4567 is identified as the first video stream data, the stream with PID 6789 as the first audio stream data, and the stream with identifier 8192 as the first subtitle data.
Here, it can be seen that the demultiplexer 401 has the same hardware structure and function as the demultiplexer used for most digital television standards; without extensive modifications, it can be made compatible with a first digital television standard that differs from most digital television standards, such as the Consumer Electronics Association CEA-708 standard used in the United States ATSC system.
In summary, the first video decoder 402 sends the separated first caption data back to the demultiplexer 401, where it is re-encapsulated into a PID data stream specific to the first digital television standard, so that the entire system acquires data of the first digital television standard, such as ATSC closed caption CC data, in a standard, unified manner.
It can be seen that only the first video decoder 402 is added; without changing the hardware structure or function of the conventional tuner, demodulator, and demultiplexer, a code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The dedicated acquisition and processing steps for ATSC closed caption CC data, and the hardware module shown in Fig. 3, are not required. Compatibility is therefore increased, the way the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
Fig. 10 shows a flow chart of a method of decoding a video data stream according to an embodiment of the present application.
The video data stream decoding method 1000 shown in fig. 10 includes: step 1001, demultiplexing first video stream data based on a first digital television standard from a received bit stream based on the first digital television standard by a demultiplexer; step 1002, decoding, by a first video decoder, first video stream data based on a first digital television standard demultiplexed from a demultiplexer and separating first subtitle data, and sending the first subtitle data to the demultiplexer; step 1003, outputting the first subtitle data, the first video stream data based on the first digital television standard and the first audio stream data based on the first digital television standard by a demultiplexer.
Here, the first digital television standard may be a standard in which subtitle data is embedded in the video stream, such as the CEA-708 standard used in the United States ATSC system.
Here too, only the first video decoder 402 needs to be added; without changing the hardware structure or function of the conventional tuner, demodulator, and demultiplexer, a code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The dedicated acquisition and processing steps for ATSC closed caption CC data, and the hardware module shown in Fig. 3, are not required. Compatibility is therefore increased, the way the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
In one embodiment of the present application, step 1002 may comprise packetizing, by the demultiplexer, the first subtitle data into a first subtitle identifier data stream with the first subtitle identifier. Wherein the first caption identifier is different from a second caption identifier of second caption data based on a second digital television standard.
In one embodiment of the present application, step 1003 may include: the first subtitle data is separated by a demultiplexer according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier, and the first audio stream data is assigned a first audio stream identifier, wherein step 1003 may include: and separating the first subtitle data, the first video stream data and the first audio stream data respectively according to the first subtitle identifier, the first video stream identifier and the first audio stream identifier by a demultiplexer.
In one embodiment of the present application, the first digital television standard may be a standard in which subtitle data is embedded in a video stream, and the second digital television standard may be a standard in which subtitle data, video stream data, and audio stream data are individually time-division multiplexed.
In one embodiment of the present application, the first digital television standard may be the United States Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 subtitle standard, and the second digital television standard may be the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
In summary, the first video decoder sends the separated first caption data back to the demultiplexer, where it is re-encapsulated into a PID data stream specific to the first digital television standard, so that the entire system acquires data of the first digital television standard, such as ATSC closed caption CC data, in a standard, unified manner.
It can be seen that only one first video decoder is added; without changing the hardware structure or function of the traditional tuner, demodulator, and demultiplexer, a code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. The dedicated acquisition and processing steps for ATSC closed caption CC data, and the hardware module shown in Fig. 3, are likewise not needed. Compatibility is therefore increased, the way the software application layer acquires resources is unified, and the hardware architecture becomes simple and clear.
FIG. 11 illustrates a block diagram of an exemplary computer system suitable for use in implementing embodiments of the present application.
The computer system may include a processor (H1); a memory (H2) coupled to the processor (H1) and having stored therein computer-executable instructions for performing, when executed by the processor, the steps of the respective methods of embodiments of the present application.
The processor (H1) may include, but is not limited to, for example, one or more processors or microprocessors or the like.
The memory (H2) may include, but is not limited to, for example, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, computer storage media (e.g., hard disk, floppy disk, solid state disk, removable disk, CD-ROM, DVD-ROM, Blu-ray disk, and the like).
In addition, the computer system may include a data bus (H3), an input/output (I/O) bus (H4), a display (H5), and an input/output device (H6) (e.g., a keyboard, a mouse, a speaker, etc.), among others.
The processor (H1) may communicate with external devices (H5, H6, etc.) via a wired or wireless network (not shown) over an I/O bus (H4).
The memory (H2) may also store at least one computer-executable instruction for performing, when executed by the processor (H1), the functions and/or steps of the methods in the embodiments described in the present technology.
In one embodiment, the at least one computer-executable instruction may also be compiled or combined into a software product, where the one or more computer-executable instructions, when executed by the processor, perform the functions and/or steps of the method in the embodiments described in the present technology.
Fig. 12 shows a schematic diagram of a non-transitory computer-readable storage medium according to an embodiment of the present disclosure.
As shown in FIG. 12, the computer-readable storage medium 1220 has instructions stored thereon, such as computer-readable instructions 1210. The computer-readable instructions 1210, when executed by a processor, may perform the various methods described above. Computer-readable storage media include, but are not limited to, volatile memory and/or nonvolatile memory. Volatile memory can include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory can include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like. For example, the computer-readable storage medium 1220 may be connected to a computing device such as a computer, and the various methods described above can then be performed by the computing device executing the computer-readable instructions 1210 stored on the computer-readable storage medium 1220.
Of course, the above-described embodiments are merely examples and not limitations. Those skilled in the art can, following the concepts of the present invention, combine steps and apparatuses from the separately described embodiments above to achieve the effects of the present invention; such combined embodiments are also included in the present invention and are not described here one by one.
It is noted that advantages, effects, and the like, which are mentioned in the present disclosure, are only examples and not limitations, and they are not to be considered essential to various embodiments of the present invention. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the invention is not limited to the specific details described above.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including, but not limited to," and may be used interchangeably with it. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, "such as but not limited to."
The flowchart of steps in the present disclosure and the above description of methods are merely illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by those skilled in the art, the order of the steps in the above embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are only used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example, using the articles "a," "an," or "the" is not to be construed as limiting the element to the singular.
In addition, the steps and devices in the embodiments are not limited to be implemented in a certain embodiment, and in fact, some steps and devices in the embodiments may be combined according to the concept of the present invention to conceive new embodiments, and these new embodiments are also included in the scope of the present invention.
The individual operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software components and/or modules including, but not limited to, a hardware circuit, an Application Specific Integrated Circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described may be implemented or described with a general purpose processor, a Digital Signal Processor (DSP), an ASIC, a field programmable gate array signal (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that may be used include Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, and the like. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
The methods disclosed herein comprise one or more acts for implementing the described methods. The methods and/or acts may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.
The above-described functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage media may be any available tangible media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk (disk) and disc (disc) includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Accordingly, a computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions stored (and/or encoded) thereon that are executable by one or more processors to perform the operations described herein. The computer program product may include packaged material.
Software or instructions may also be transmitted over a transmission medium. For example, the software may be transmitted from a website, server, or other remote source using a transmission medium such as coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, or microwave.
Further, modules and/or other suitable means for carrying out the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a CD or floppy disk) so that the user terminal and/or base station can obtain the various methods when coupled to or providing storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the disclosure and the following claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hard wiring, or any combination of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, such that a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the invention to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A video data stream decoding system, comprising:
a demultiplexer configured to demultiplex first video stream data based on a first digital television standard from the received bit stream based on the first digital television standard;
a first video decoder connected to the demultiplexer, the first video decoder configured to decode first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separate first subtitle data, and transmit the first subtitle data to the demultiplexer;
wherein the demultiplexer is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
2. The system of claim 1, wherein the demultiplexer is further configured to packetize the first subtitle data into a first subtitle identifier data stream with a first subtitle identifier that is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
3. The system of claim 2, wherein the demultiplexer is configured to separate out the first subtitle data according to the first subtitle identifier.
4. The system of claim 2, wherein the first video stream data is assigned a first video stream identifier and the first audio stream data is assigned a first audio stream identifier, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data based on the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
5. The system of claim 1, wherein the first digital television standard is a standard for embedding subtitle data in a video stream, and the second digital television standard is a standard for time-division multiplexing subtitle data, video stream data, and audio stream data separately.
6. A method of decoding a video data stream, comprising:
demultiplexing first video stream data based on the first digital television standard from the received bit stream based on the first digital television standard by a demultiplexer;
decoding, by a first video decoder, first video stream data based on a first digital television standard demultiplexed from the demultiplexer and separating first subtitle data, and transmitting the first subtitle data to the demultiplexer;
and outputting the first subtitle data, the first video stream data based on the first digital television standard and the first audio stream data based on the first digital television standard by a demultiplexer.
7. The method of claim 6, wherein the method comprises:
packing, by a demultiplexer, the first subtitle data into a first subtitle identifier data stream with a first subtitle identifier, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
8. The method of claim 7, wherein the outputting, by a demultiplexer, the first subtitle data, the first video stream data based on a first digital television standard, and the first audio stream data based on a first digital television standard comprises:
separating, by a demultiplexer, the first subtitle data according to the first subtitle identifier.
9. The method of claim 8, wherein the first video stream data is assigned a first video stream identifier and the first audio stream data is assigned a first audio stream identifier, wherein the outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, the first audio stream data based on the first digital television standard further comprises:
separating, by a demultiplexer, the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
10. The method of claim 7, wherein the first digital television standard is a standard for embedding subtitle data in a video stream, and the second digital television standard is a standard for time-division multiplexing subtitle data, video stream data, and audio stream data separately.
11. An electronic device, comprising:
a memory to store instructions;
a processor for reading the instructions in the memory and performing the method of any one of claims 6-10.
12. A non-transitory storage medium having instructions stored thereon,
wherein the instructions, when read by a processor, cause the processor to perform the method of any one of claims 6-10.
CN202111642351.9A 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium Active CN114302215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111642351.9A CN114302215B (en) 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111642351.9A CN114302215B (en) 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium

Publications (2)

Publication Number Publication Date
CN114302215A true CN114302215A (en) 2022-04-08
CN114302215B CN114302215B (en) 2023-09-29

Family

ID=80971264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111642351.9A Active CN114302215B (en) 2021-12-29 2021-12-29 Video data stream decoding system, method, electronic device and medium

Country Status (1)

Country Link
CN (1) CN114302215B (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894578A (en) * 2002-10-11 2010-11-24 Thomson Licensing Method and apparatus for synchronizing data streams
CN101860699A (en) * 2003-09-17 2010-10-13 LG Electronics Inc. Digital broadcasting transmitter and method for processing captions thereof
US20090214178A1 (en) * 2005-07-01 2009-08-27 Kuniaki Takahashi Reproduction Apparatus, Video Decoding Apparatus, and Synchronized Reproduction Method
TW201038065A (en) * 2009-04-14 2010-10-16 Mediatek Singapore Pte Ltd Method for processing a subtitle data stream of a video program and associated video display system
CN104137555A (en) * 2012-03-21 2014-11-05 Sony Corporation Non-closed caption data transport in standard caption service
KR20120107897A (en) * 2012-08-16 2012-10-04 LG Electronics Inc. Method of transmitting a digital broadcast signal
CN203327190U (en) * 2013-02-28 2013-12-04 Qingdao Haier Electronics Co., Ltd. Television program caption processing system and broadcast system
CN103281495A (en) * 2013-05-14 2013-09-04 Wuxi Beidou Xingtong Information Technology Co., Ltd. Digital television receiver compatible with DVB (Digital Video Broadcasting) and ATSC (Advanced Television Systems Committee) standards
CN103248927A (en) * 2013-05-15 2013-08-14 Wuxi Beidou Xingtong Information Technology Co., Ltd. MIMO (multiple-input multiple-output) DVB-T (Digital Video Broadcasting-Terrestrial) set-top box with caption processing function
CN105791957A (en) * 2013-05-15 2016-07-20 Kong Tao Ultra-high-definition digital television receiver using HEVC (high efficiency video coding)
US20160173812A1 (en) * 2013-09-03 2016-06-16 LG Electronics Inc. Apparatus for transmitting broadcast signals, apparatus for receiving broadcast signals, method for transmitting broadcast signals and method for receiving broadcast signals
WO2015134878A1 (en) * 2014-03-07 2015-09-11 Thomson Licensing Simultaneous subtitle closed caption system
CN111276170A (en) * 2014-08-07 2020-06-12 Panasonic Intellectual Property Corporation of America Decoding system and decoding method
CN107211170A (en) * 2015-02-20 2017-09-26 Sony Corporation Transmission device, transmission method, reception device, and reception method
CN104780416A (en) * 2015-03-18 2015-07-15 Fujian Newland Communication Science & Technology Co., Ltd. Set-top box subtitle display system
CN104917983A (en) * 2015-05-29 2015-09-16 Beijing Shidai Aoshi Technology Co., Ltd. Device, system, and method for processing hidden subtitles in digital video signals
WO2017164551A1 (en) * 2016-03-22 2017-09-28 LG Electronics Inc. Broadcast signal transmission and reception method and device
CN107864393A (en) * 2017-11-17 2018-03-30 Qingdao Hisense Electric Co., Ltd. Method and device for displaying video in synchronization with subtitles
CN109963092A (en) * 2017-12-26 2019-07-02 Shenzhen UBTECH Technology Co., Ltd. Subtitle processing method, device, and terminal
CN109218758A (en) * 2018-11-19 2019-01-15 Zhuhai Maike Intelligent Technology Co., Ltd. Transcoding system and method supporting the CC caption function
CN112055262A (en) * 2020-08-11 2020-12-08 Shiruofei Information Technology (Shanghai) Co., Ltd. Method and system for displaying network streaming media subtitles
CN112055253A (en) * 2020-08-14 2020-12-08 CCTV International Video Communication Co., Ltd. Method and device for adding and multiplexing an independent subtitle stream
CN112672099A (en) * 2020-12-31 2021-04-16 Shenzhen Grandstream Network Technology Co., Ltd. Subtitle data generation and presentation method, device, computing device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JI Aiguo; YI Zhusong: "Design of a DVB subtitle decoder based on the Guoxin 6102 chip", Fujian Computer, no. 01 *
LUO Wei: "Using separated-subtitle technology to solve the problem of producing dialogue subtitles for international versions of television programs", Modern Television Technology, no. 05 *

Also Published As

Publication number Publication date
CN114302215B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
KR102101826B1 (en) Non-closed caption data transport in standard caption service
KR101408485B1 (en) Method and apparatus for encoding metadata into a digital program stream
US20110149153A1 (en) Apparatus and method for dtv closed-captioning processing in broadcasting and communication system
US20130176387A1 (en) Digital receiver and method for processing 3d contents in digital receiver
KR20160102479A (en) Method and apparatus for transceiving broadcast signal
US20130209063A1 (en) Digital receiver and content processing method in digital receiver
KR101486354B1 (en) Broadcast receiver and method for processing broadcast data
KR102189520B1 (en) Apparatus and method for transmitting and receiving broadcasting
JP2015005917A (en) Information transmission apparatus, information transmission method, and information reception apparatus
KR20140102083A (en) Digital broadcast receiver and method for updating channel information
JP6929998B2 (en) Receiver
US9596450B2 (en) Video transmission device, video transmission method, and video playback device
CN114302215B (en) Video data stream decoding system, method, electronic device and medium
CN101083732A (en) Digital television receiver and method for processing broadcast signal
CN106060646A (en) Ultrahigh-definition digital television receiver applying subtitle processing module
JP6707479B2 (en) Broadcast signal transmitter
KR101325802B1 (en) Digital Broadcasting Transmitter, Digital Broadcasting Receiver and System and Method for Serving Digital Broadcasting
KR20080054181A (en) An apparatus and a method for receiving broadcast
JP2020036338A (en) Broadcast signal receiver unit
JP7242775B2 (en) receiver
JP2020010294A (en) Reception method of broadcast signal reception device
JP6742955B2 (en) Television receiver and receiving method
KR100525404B1 (en) Method for watching restriction of Digital broadcast
US8824858B2 (en) Information processing apparatus and information processing method
JP6707705B2 (en) Broadcast signal receiver

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 101, 1st Floor, Building 3, Yard 18, Kechuang 14th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 101102

Applicant after: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant after: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 101102 No. 2179, floor 2, building D, building 33, No. 99, Kechuang 14th Street, Beijing Economic and Technological Development Zone (centralized office area)

Applicant before: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant before: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: 101102 Room 101, 1/F, Building 3, No. 18 Courtyard, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant after: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 101, 1st Floor, Building 3, Yard 18, Kechuang 14th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 101102

Applicant before: Beijing yisiwei Computing Technology Co.,Ltd.

Applicant before: GUANGZHOU QUANSHENGWEI INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant