Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or arrangement of functions, and any functional block or arrangement of functions may be implemented as a physical entity or a logical entity, or a combination of both.
The present invention will be described in further detail below with reference to the drawings and detailed description for the purpose of enabling those skilled in the art to understand the invention better.
Note that the example to be described next is only one specific example; it is not intended to limit embodiments of the present invention to the specific shapes, hardware, connection relations, steps, values, conditions, data, sequences, etc. shown and described. Those skilled in the art can, upon reading the present specification, use the concepts of the invention to construct further embodiments not mentioned in the specification.
DVB and ISDB each define the specification and format adopted for subtitles. The developer usually distributes a packet identifier (packetized identifier, PID) filter to the demultiplexer (Demux) to obtain the packetized elementary stream (PES) data of the subtitle, which is then decoded, rendered, and finally displayed for users to browse. That is, at the transmitting end, the caption data of the video is separated from the video data and the audio data and time-division multiplexed; at the receiving end, the code stream is demultiplexed to separate out the individual caption data, video data, and audio data. A single set of receiver hardware can thereby be made compatible with multiple digital television standards.
Fig. 1 shows a block diagram of a receiver 100 of the DVB digital television standard. The receiver 100 includes a tuner 101 for receiving radio frequency signals and performing frequency conversion, filtering, automatic gain control, etc.; a demodulator 102 for demodulating the data output from the tuner 101; and a demultiplexer 103 for demultiplexing the data output from the demodulator 102 into individual DVB-audio data, DVB-video data, and DVB-subtitle data by means of their respective packet identifiers PID.
It can be seen that for such digital television streams, the DVB-subtitle data is directly derived from the demultiplexer 103 by means of its own packet identifier PID.
But for some exceptional digital television standards, such as the Consumer Electronics Association/Electronic Industries Alliance (CEA/EIA)-708 standard adopted in the ATSC standard, the subtitle data is not individually packetized but is embedded in the video data stream, commonly referred to as ATSC Closed Captioning (CC). Closed caption CC is called a "caption for people with hearing impairment" because it describes all sounds and dialogue in the video by words or symbols, including sounds such as "knocking" or "running water" that are not present in the general captions (subtitles) of DVB and ISDB, which describe only dialogue by words.
Closed caption CC data may be transmitted over 9 channels: the odd field includes 4 channels, CC1, CC2, TEXT1, TEXT2; the even field includes 5 channels, CC3, CC4, TEXT3, TEXT4, and XDS (Extended Data Services). CC1, CC2, CC3, and CC4 can be used to transmit text in different languages; the content is mainly the dialogue of the people on screen, and the corresponding text can be displayed near the speaker's mouth. TEXT1, TEXT2, TEXT3, and TEXT4 are mainly used to transmit information such as weather forecasts and news. XDS is generally used to transmit time information, television network information, the names of current television programs, etc., and the data transmitted by XDS is mainly used for the V-CHIP (program rating). The closed caption CC mainly follows two standards: the EIA-608 (CEA-608) standard and the CEA-708 standard.
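The channel layout described above can be summarized in a small data structure. The following is an illustrative Python sketch only; the field assignments and purposes are taken from the description in this paragraph, and the dictionary itself is not part of any standard:

```python
# Illustrative summary of the nine closed-caption channels described above.
# The field/purpose entries follow this paragraph, not a normative table.
CC_CHANNELS = {
    # odd field: 4 channels
    "CC1": {"field": "odd", "purpose": "dialogue text"},
    "CC2": {"field": "odd", "purpose": "dialogue text"},
    "TEXT1": {"field": "odd", "purpose": "information (weather, news, ...)"},
    "TEXT2": {"field": "odd", "purpose": "information (weather, news, ...)"},
    # even field: 5 channels
    "CC3": {"field": "even", "purpose": "dialogue text"},
    "CC4": {"field": "even", "purpose": "dialogue text"},
    "TEXT3": {"field": "even", "purpose": "information"},
    "TEXT4": {"field": "even", "purpose": "information"},
    "XDS": {"field": "even", "purpose": "time, network, program name (V-CHIP)"},
}

odd = [name for name, v in CC_CHANNELS.items() if v["field"] == "odd"]
even = [name for name, v in CC_CHANNELS.items() if v["field"] == "even"]
```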
The data stream of the ATSC closed caption resource is not transmitted as a separate packet identifier PID data stream, but is carried in Moving Picture Experts Group-2 (MPEG-2) picture user data (MPEG-2 Picture User Data). As shown in FIG. 2, FIG. 2 shows the transmission structure of digital video, Program Map Table (PMT), Event Information Table (EIT), audio and other data and synchronization information in a digital television bitstream under the CEA-708 standard. The bit stream of the digital television comprises audio data, video data and control data, wherein the control data is responsible for controlling the playing of the audio data and the video data. As can be seen in fig. 2, digital television closed caption (DTVCC) service data, i.e. picture user data (Picture User Data) including caption text, window instructions, etc., is encapsulated in the video data.
Then, for such a closed caption video stream, a special hardware module for decoding the video stream of such a digital television standard needs to be designed separately so that the video stream can be decoded and played smoothly at the decoder of the receiving end, as shown in fig. 3. FIG. 3 shows a block diagram of a receiver 300 of the CEA-708 digital television standard. The receiver 300 includes a tuner 101 for receiving radio frequency signals and performing frequency conversion, filtering, automatic gain control, etc.; a demodulator 102 for demodulating the data output from the tuner 101; a demultiplexer 303 for demultiplexing the data output from the demodulator 102 into separate ATSC-audio data and ATSC-video data; and a video decoder 304 for re-decoding the ATSC-video data and separating out the ATSC-closed caption CC data.
It can be seen that such closed caption data extraction is a special data stream acquisition through a special channel (e.g., the additional video decoder 304). However, this makes the resource acquisition mode of the software application layer non-uniform, and the hardware architecture is complex.
The present application seeks to unify the resource acquisition modes of the software application layer so that the hardware architecture becomes simple and clear.
Fig. 4 shows a block diagram of a video data stream decoding system according to an embodiment of the application.
As shown in fig. 4, the video data stream decoding system 400 includes: a demultiplexer 401 configured to demultiplex first video stream data based on the first digital television standard from the received bit stream based on the first digital television standard; a first video decoder 402 connected to the demultiplexer 401, the first video decoder 402 being configured to decode the first video stream data based on the first digital television standard demultiplexed from the demultiplexer 401 and separate out first subtitle data, and to transmit the first subtitle data to the demultiplexer 401; wherein the demultiplexer 401 is configured to output first subtitle data, first video stream data based on a first digital television standard, first audio stream data based on the first digital television standard.
Fig. 5 shows an exemplary diagram of an application of the decoding system according to the embodiment of fig. 4.
As shown in fig. 5, the tuner 101 is configured to receive radio frequency signals and is responsible for frequency conversion, filtering, automatic gain control, and other functions. The demodulator 102 is configured to demodulate the data output from the tuner 101 to obtain a bit stream based on the first digital television standard. The demultiplexer 401 is configured to demultiplex the first video stream data based on the first digital television standard and the first audio stream data based on the first digital television standard from the received bit stream based on the first digital television standard. The first video decoder 402 is for decoding the first video stream data based on the first digital television standard demultiplexed from the demultiplexer 401 and separating out first subtitle data, and transmitting the first subtitle data to the demultiplexer 401. The demultiplexer 401 outputs first subtitle data, first video stream data based on the first digital television standard, and first audio stream data based on the first digital television standard.
Here, the first digital television standard may be a standard in which subtitle data is embedded in the video stream, such as the Consumer Electronics Association CEA-708 standard of the American ATSC.
Here, it can be seen that only one first video decoder 402 is added; without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. Steps such as the special data stream acquisition and processing for ATSC closed caption CC data, and hardware modules such as those shown in fig. 3, are not needed. Compatibility is thus increased, the resource acquisition mode of the software application layer is unified, and the hardware architecture becomes simple and clear.
Of course, the process of demultiplexing the code stream based on the second digital television standard by the demultiplexer 401 is also shown in fig. 5. Here, the second digital television standard is different from the first digital television standard, and may be a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed, as in most digital television standards, for example the Digital Video Broadcasting DVB standard or the Integrated Services Digital Broadcasting ISDB standard.
In a conventional manner, the demultiplexer 401 may identify the second audio stream data, second video stream data, and second subtitle data based on the second digital television standard from the code stream based on the second digital television standard by their respective packet identifiers PID, and separate out the individual second audio stream data, second video stream data, and second subtitle data. That is, the second audio stream data, second video stream data, and second subtitle data based on the second digital television standard are each assigned a corresponding packet identifier PID so that the demultiplexer 401 knows how to separate them.
For example, the second audio stream data is assigned a packet identifier PID of 1111, the second video stream data a packet identifier PID of 2222, and the second subtitle data a packet identifier PID of 3333. Accordingly, the demultiplexer 401 identifies the code stream having a PID of 1111 as the second audio stream data, the code stream having a PID of 2222 as the second video stream data, and the code stream having a PID of 3333 as the second subtitle data.
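The PID-based separation just described can be sketched as follows. This is an illustrative Python sketch only: the PID values are the example values from this paragraph, and each packet is simplified to a (pid, payload) tuple rather than a real transport stream packet:

```python
# Sketch of PID-based demultiplexing with the example PIDs from the text:
# audio 1111, video 2222, subtitle 3333. Packets are (pid, payload) tuples.
PID_AUDIO, PID_VIDEO, PID_SUBTITLE = 1111, 2222, 3333

def demultiplex(packets):
    """Route each (pid, payload) packet to its elementary stream."""
    streams = {"audio": [], "video": [], "subtitle": []}
    route = {PID_AUDIO: "audio", PID_VIDEO: "video", PID_SUBTITLE: "subtitle"}
    for pid, payload in packets:
        name = route.get(pid)
        if name is not None:  # packets with unknown PIDs are simply dropped
            streams[name].append(payload)
    return streams

streams = demultiplex([(1111, b"a0"), (2222, b"v0"), (3333, b"s0"), (9999, b"x")])
```

In the same way, the demultiplexer 401 needs only a PID-to-stream mapping to separate a multiplexed code stream.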
Next, the specific process by which the first video decoder 402 separates out the first subtitle data and the demultiplexer encapsulates the first subtitle data will be described in detail.
Fig. 6 shows a schematic diagram of the MPEG-2 transport stream format used as the bit stream of a digital television DTV, i.e., the transport stream format of the first digital television standard.
The transport stream format includes a packet identifier PID, which is a bit string 13 bits in length. Different packet identifiers PID are set for the audio stream, the video stream, and the control stream, respectively. Accordingly, the demultiplexer 401 can separate the first video stream data and first audio stream data, converted into the packetized elementary stream PES data format, and the first control stream data from the MPEG-2 transport stream (specifically, the data_byte field in the transport stream format) according to their respective different packet identifiers PID.
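For illustration, the 13-bit PID can be read from the transport packet header as sketched below. In the MPEG-2 transport stream format, the PID occupies the low 5 bits of the second header byte and all 8 bits of the third byte, which yields the 13-bit range referred to in this description:

```python
# Sketch of extracting the 13-bit PID from an MPEG-2 transport packet header.
# The PID spans bits of header bytes 1 and 2 after the 0x47 sync byte.
TS_SYNC_BYTE = 0x47

def packet_pid(packet: bytes) -> int:
    if len(packet) < 4 or packet[0] != TS_SYNC_BYTE:
        raise ValueError("not a transport-stream packet")
    return ((packet[1] & 0x1F) << 8) | packet[2]

# A header carrying PID 0x1FFF, the largest 13-bit value:
header = bytes([0x47, 0x1F, 0xFF, 0x10])
```

For example, a header with bytes 0x47 0x08 0xAE carries PID 0x08AE, i.e. 2222, the example video PID used above.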
Fig. 7 shows a schematic diagram of an abbreviated version of the packetized elementary stream PES data format, with part of the format omitted.
The first video decoder 402 acquires elementary stream ES video data from the first video stream data in the PES data format shown in fig. 7. Fig. 8 shows a schematic diagram of an abbreviated version of the elementary stream ES data format, with part of the format omitted.
The first video decoder 402 acquires the user data in which the first subtitle data is located from the elementary stream ES video data, and finally converts the user data into first subtitle data, for example, closed caption CC data. Fig. 9 shows a schematic diagram of the data format of closed caption CC data. Here, the first subtitle data obtained by the first video decoder 402 is not in a standard transport stream format.
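The location of the user data in the ES can be sketched as follows. This is a heavily simplified illustration, not a conforming CEA-708 parser: it only locates the MPEG-2 user_data_start_code (0x000001B2) followed by the ATSC "GA94" identifier and returns the bytes up to the next start code, whereas a real decoder must parse the full cc_data() syntax:

```python
# Simplified sketch of locating closed-caption user data in MPEG-2 ES video.
# A real CEA-708 extractor must parse the complete cc_data() structure.
USER_DATA_START_CODE = b"\x00\x00\x01\xB2"
ATSC_IDENTIFIER = b"GA94"

def extract_user_data(es: bytes) -> bytes:
    start = es.find(USER_DATA_START_CODE + ATSC_IDENTIFIER)
    if start < 0:
        return b""
    body = start + len(USER_DATA_START_CODE) + len(ATSC_IDENTIFIER)
    end = es.find(b"\x00\x00\x01", body)  # next start code, if any
    return es[body:] if end < 0 else es[body:end]

es = b"\x00\x00\x01\xB2GA94\x03\x42\x00\x00\x01\x00"
cc_payload = extract_user_data(es)
```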
In order for the demultiplexer 401 to still be able to separate the first subtitle data, e.g. the closed caption CC data, by a packet identifier PID, the demultiplexer 401 is further configured to packetize the first subtitle data with a first subtitle identifier into a first subtitle identifier PID data stream, wherein the first subtitle identifier is different from a second subtitle identifier of the second subtitle data based on the second digital television standard.
Here, in one embodiment, the demultiplexer 401 may distinguish the first subtitle data from the second subtitle data based on the second digital television standard so that it knows which data to packetize, because the second subtitle data is already packetized and the demultiplexer 401 need not packetize it again. Since the packet identifiers in a code stream based on the second digital television standard are 13-bit binary numbers, i.e., in the range 0-8191 (0 to 0x1FFF), the first video decoder 402 may attach a parameter greater than 8191, for example 8192, to the first subtitle data. When the demultiplexer 401 receives the first subtitle data carrying the value 8192, it does not treat it as second subtitle data, whose identifiers lie in the range 0-8191, but packetizes the first subtitle data with the first subtitle identifier PID. Of course, this is not necessary; the demultiplexer 401 may instead packetize the data stream with a PID directly if, upon receiving the first subtitle data obtained by the first video decoder 402, it determines that the data stream does not yet have a PID.
The acquisition of the first subtitle data from the packetized elementary stream PES has been described above in connection with figs. 7 to 9. The demultiplexer 401 may packetize the acquired first subtitle data into a first subtitle identifier PID data stream with the first subtitle identifier by reversing that process, going from data to elementary stream ES to packetized elementary stream PES. The detailed packetizing process is not described here.
Here, the first caption identifier may be made distinct from the packet identifiers that the demultiplexer 401 originally uses to separate the code stream based on the second digital television standard, so that the demultiplexer 401 can distinguish the first caption data of the first digital television standard from that of the second digital television standard. For example, if the packet identifiers used for the code stream based on the second digital television standard are certain 13-bit binary numbers in the range 0-8191 (0 to 0x1FFF), such as 2222 and 3333, then the first caption identifier may be set to a number other than those, such as 4444, and the first caption data may then be separated as normal caption data. Of course, the choice of the first subtitle identifier is not limited thereto, as long as the demultiplexer 401 can correctly distinguish the first subtitle data of the first digital television standard from that of the second digital television standard. And if, as in the embodiment above, the demultiplexer 401 directly packetizes the data stream with a PID upon receiving the first subtitle data obtained by the first video decoder 402, that PID may be set to a value different from the conventional PIDs, for example 8192.
Of course, the PID and the packetizing process above are both examples, chosen to minimize changes to the interface parameters of the demultiplexer, and are not limiting; in fact, the demultiplexer may packetize the subtitle data decoded by the first video decoder in other ways, which are not expanded upon one by one here.
Then, the demultiplexer 401 is configured to separate the first subtitle data from the first subtitle identifier PID data stream according to the first subtitle identifier after receiving the first subtitle identifier PID data stream transmitted from the first video decoder 402.
As before, the first video stream data is also assigned a first video stream identifier PID and the first audio stream data is assigned a first audio stream identifier PID, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID, respectively.
For example, the first video stream identifier PID is 4567, the first audio stream identifier PID is 6789, and the first subtitle identifier is 8192. As such, separating the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID, respectively, includes: identifying the code stream having a PID of 4567 as the first video stream data, the code stream having a PID of 6789 as the first audio stream data, and the code stream having a PID of 8192 as the first subtitle data.
Here, it can be seen that the hardware structure and function of the demultiplexer 401 are the same as those used for most digital television standards, while the demultiplexer 401 is additionally compatible with a first digital television standard different from most digital television standards, for example, the Consumer Electronics Association CEA-708 standard of ATSC.
In summary, the first video decoder 402 feeds the separated first subtitle data back to the demultiplexer 401, which repackages it into a PID data stream specific to the first digital television standard, so that the entire system acquires the first digital television standard data, e.g., ATSC closed caption CC data, in a standard, unified manner.
It can be seen that only one first video decoder 402 is added; without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. Steps such as the special data stream acquisition and processing for ATSC closed caption CC data, and hardware modules such as those shown in fig. 3, are not needed. Compatibility is thus increased, the resource acquisition mode of the software application layer is unified, and the hardware architecture becomes simple and clear.
Fig. 10 shows a flowchart of a video data stream decoding method according to an embodiment of the present application.
The video data stream decoding method 1000 shown in fig. 10 includes: step 1001, demultiplexing, by a demultiplexer, first video stream data based on a first digital television standard from a received bit stream based on the first digital television standard; step 1002, decoding, by a first video decoder, the first video stream data based on the first digital television standard demultiplexed by the demultiplexer, separating out first subtitle data, and transmitting the first subtitle data to the demultiplexer; and step 1003, outputting, by the demultiplexer, the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
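The three steps of method 1000 can be sketched end to end as follows. This is an illustrative Python sketch only; the function names and the dictionary-based stand-ins for the bit stream and the elementary streams are hypothetical and serve solely to show the data flow between steps 1001, 1002, and 1003:

```python
# Illustrative data flow of method 1000; all data structures are stand-ins.
def step_1001_demultiplex(bitstream):
    """Step 1001: pull first video and audio stream data from the bit stream."""
    return bitstream["video"], bitstream["audio"]

def step_1002_decode(video_stream):
    """Step 1002: decode the video and split off the embedded subtitle data."""
    subtitles = video_stream.pop("embedded_subtitles", b"")
    return video_stream, subtitles

def step_1003_output(subtitles, video_stream, audio_stream):
    """Step 1003: output subtitle, video, and audio streams together."""
    return {"subtitle": subtitles, "video": video_stream, "audio": audio_stream}

bitstream = {"video": {"frames": b"v", "embedded_subtitles": b"cc"}, "audio": b"a"}
video, audio = step_1001_demultiplex(bitstream)
video, subs = step_1002_decode(video)
out = step_1003_output(subs, video, audio)
```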
Here, the first digital television standard may be a standard in which subtitle data is embedded in the video stream, such as the Consumer Electronics Association CEA-708 standard of the American ATSC.
Here, it can be seen that only one first video decoder 402 is added; without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. Steps such as the special data stream acquisition and processing for ATSC closed caption CC data, and hardware modules such as those shown in fig. 3, are not needed. Compatibility is thus increased, the resource acquisition mode of the software application layer is unified, and the hardware architecture becomes simple and clear.
In one embodiment of the present application, step 1002 may include packetizing, by the demultiplexer, the first subtitle data with the first subtitle identifier into a first subtitle identifier data stream, wherein the first subtitle identifier is different from a second subtitle identifier of second subtitle data based on a second digital television standard.
In one embodiment of the present application, step 1003 may include: the first subtitle data is separated by a demultiplexer according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier and the first audio stream data is assigned a first audio stream identifier, wherein step 1003 may include: the first subtitle data, the first video stream data, and the first audio stream data are separated by the demultiplexer according to the first subtitle identifier, the first video stream identifier, and the first audio stream identifier, respectively.
In one embodiment of the present application, the first digital television standard may be a standard in which subtitle data is embedded in a video stream, and the second digital television standard may be a standard in which subtitle data, video stream data, and audio stream data are separately time-division multiplexed.
In one embodiment of the application, the first digital television standard may be the American Advanced Television Systems Committee ATSC standard with the Consumer Electronics Association CEA-708 caption standard, and the second digital television standard may be the Digital Video Broadcasting DVB standard or the Integrated Services Digital Broadcasting ISDB standard.
In summary, the separated first subtitle data is fed back to the demultiplexer by the first video decoder and repackaged into a PID data stream specific to the first digital television standard, so that the entire system acquires the first digital television standard data, for example ATSC closed caption CC data, in a standard, unified manner.
It can be seen that only one first video decoder is added; without changing the hardware structure and functions of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard, in which the caption data is embedded in the video stream, can be decoded compatibly. Steps such as the special data stream acquisition and processing for ATSC closed caption CC data, and hardware modules such as those shown in fig. 3, are not required. Compatibility is thus increased, the resource acquisition mode of the software application layer is unified, and the hardware architecture becomes simple and clear.
FIG. 11 illustrates a block diagram of an exemplary computer system suitable for use in implementing embodiments of the present application.
The computer system may include a processor (H1); a memory (H2) coupled to the processor (H1) and having stored therein computer executable instructions for performing the steps of the methods of the embodiments of the present application when executed by the processor.
The processor (H1) may include, but is not limited to, for example, one or more processors or microprocessors or the like.
The memory (H2) may include, for example, but is not limited to, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, and computer storage media (e.g., hard disks, floppy disks, solid state disks, removable disks, CD-ROMs, DVD-ROMs, Blu-ray discs, etc.).
In addition, the computer system may include a data bus (H3), an input/output (I/O) bus (H4), a display (H5), and an input/output device (H6) (e.g., keyboard, mouse, speaker, etc.), etc.
The processor (H1) may communicate with external devices (H5, H6, etc.) via a wired or wireless network (not shown) through an I/O bus (H4).
The memory (H2) may also store at least one computer executable instruction for performing the functions and/or steps of the methods in the embodiments described in the present technology when executed by the processor (H1).
In one embodiment, the at least one computer-executable instruction may also be compiled or otherwise formed into a software product in which one or more computer-executable instructions, when executed by a processor, perform the functions and/or steps of the methods described in the embodiments of the technology.
Fig. 12 shows a schematic diagram of a non-transitory computer-readable storage medium according to an embodiment of the disclosure.
As shown in fig. 12, the computer-readable storage medium 1220 has instructions stored thereon, such as computer-readable instructions 1210. When executed by a processor, the computer-readable instructions 1210 may perform the various methods described above. Computer-readable storage media include, but are not limited to, volatile memory and/or nonvolatile memory. Volatile memory can include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read Only Memory (ROM), hard disks, flash memory, and the like. For example, the computer-readable storage medium 1220 may be connected to a computing device such as a computer, and the computing device may then run the computer-readable instructions 1210 stored on the computer-readable storage medium 1220 to perform the various methods described above.
Of course, the above-described specific embodiments are merely examples, and those skilled in the art may, according to the concept of the present invention, separately combine some of the steps and means from the above-described embodiments to achieve the effects of the present invention; such combined embodiments are also included in the present invention, and such combinations are not described herein one by one.
Note that advantages, effects, and the like mentioned in this disclosure are merely examples and are not to be construed as necessarily essential to the various embodiments of the invention. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the invention is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, the devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are words of openness meaning "including but not limited to," and are used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
The step flow diagrams in this disclosure and the above method descriptions are merely illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. The order of steps in the above embodiments may be performed in any order, as will be appreciated by those skilled in the art. Words such as "thereafter," "then," "next," and the like are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example, using the articles "a," "an," or "the," is not to be construed as limiting the element to the singular.
In addition, the steps and means in the various embodiments herein are not limited to practice in a certain embodiment, and indeed, some of the steps and some of the means associated with the various embodiments herein may be combined according to the concepts of the present invention to contemplate new embodiments, which are also included within the scope of the present invention.
The individual operations of the above-described method may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software components and/or modules including, but not limited to, circuitry for hardware, an Application Specific Integrated Circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an ASIC, a field programmable gate array signal (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that may be used include Random Access Memory (RAM), read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, and so forth. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across several storage media.
The methods disclosed herein include one or more acts for implementing the described methods. The methods and/or acts may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of acts is specified, the order and/or use of specific acts may be modified without departing from the scope of the claims.
The functions described above may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Thus, the computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may comprise packaging material.
The software or instructions may also be transmitted over a transmission medium. For example, software may be transmitted from a website, server, or other remote source using a transmission medium such as a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, or microwave.
Furthermore, modules and/or other suitable means for performing the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein may be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a CD or floppy disk, etc.) so that the user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Further, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the disclosure and the appended claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination of these. Features that implement the functions may also be physically located at various locations, including being distributed such that portions of the functions are implemented at different physical locations. Also, as used herein, including in the claims, the use of "or" in the recitation of items beginning with "at least one of" indicates a disjunctive recitation, such that, for example, a recitation of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
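The "at least one of A, B or C" construction above can be restated as follows: the recitation is satisfied by any non-empty combination of the listed items. A minimal illustrative sketch (not part of the disclosure; the function name is chosen here for illustration only) enumerates those combinations:

```python
from itertools import combinations

def satisfying_combinations(items):
    """Enumerate every non-empty combination of the listed items.

    Under the construction described above, a recitation of
    "at least one of A, B or C" is met by any one of these combinations.
    """
    return [set(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

# "at least one of A, B or C" -> A, B, C, AB, AC, BC, ABC (7 combinations)
print(satisfying_combinations(["A", "B", "C"]))
```

For three items this yields seven combinations, matching the enumeration A or B or C, AB, AC, BC, and ABC given above.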
Various changes, substitutions, and alterations are possible to the techniques described herein without departing from the teachings herein, as defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods, and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the invention to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.