Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that they are not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and that any functional block or functional arrangement may be implemented as a physical entity or a logical entity, or a combination of both.
In order that those skilled in the art may better understand the present invention, the invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.
Note that the example described next is only a specific example and is not intended to limit the embodiments of the present invention to particular shapes, hardware, connections, steps, numerical values, conditions, data, orders, and the like. Those skilled in the art can, upon reading this specification, utilize the concepts of the present invention to construct further embodiments beyond those specifically described herein.
DVB and ISDB each define their own subtitle specification and format. A developer typically obtains a Packetized Elementary Stream (PES) of subtitle data by allocating a PID (packet identifier) filter to a demultiplexer (Demux), and then decodes and renders the PES data stream for final display to the user. That is, at the transmitting end, the subtitle data of the video is separated from the video data and the audio data and is time-division multiplexed with them, so that at the receiving end the code stream can be demultiplexed to recover the subtitle data, video data, and audio data separately. Receiver hardware of this kind is compatible with most digital television standards.
Fig. 1 shows a block diagram of a receiver 100 of the DVB digital television standard. The receiver 100 includes a tuner 101 for receiving a radio frequency signal and performing frequency conversion, filtering, and automatic gain control functions; a demodulator 102, configured to demodulate data output by the tuner 101; and a demultiplexer 103 for demultiplexing the data output from the demodulator 102 and decomposing the data into individual DVB-audio data, DVB-video data and DVB-subtitle data by respective PID identifiers.
It can be seen that for such digital television streams, the DVB-subtitle data originates directly from the demultiplexer 103 via its own packet identifier PID.
However, some digital television standards are exceptions. For example, under the Consumer Electronics Association/Electronic Industries Alliance (CEA/EIA)-708 standard adopted in the ATSC standard, the subtitle data is not separately packetized but is embedded in the video data stream, commonly referred to as ATSC Closed Caption (CC). Closed captions are sometimes called "captions for the hearing impaired" because they describe all sounds and dialog in the video by words or symbols, including sounds such as a knock on the door or the babbling of a stream, which do not appear in the ordinary subtitles of DVB and ISDB, where only the dialog is transcribed.
Closed caption CC data may be transmitted over 9 channels: the odd field carries 4 channels (CC1, CC2, TEXT1, TEXT2), and the even field carries 5 channels (CC3, CC4, TEXT3, TEXT4, and XDS (Extended Data Services)). CC1 through CC4 can carry captions in different languages; their content is mainly the dialog of the characters on screen, and the corresponding text can be displayed near the speaker's mouth. TEXT1 through TEXT4 are mainly used to transmit information such as weather forecasts and news. XDS is generally used to transmit time information, TV network information, the name of the current TV program, and the like, mainly for V-CHIP (program rating) use. Closed captioning mainly follows two standards: CEA-608 (EIA-608) and CEA-708 (EIA-708).
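The channel layout described above can be summarized as a small table. This is purely an illustrative sketch of the grouping in the text (the names are from the description; the helper function and its wording are hypothetical):

```python
# Illustrative summary of the nine closed-caption channels described above,
# grouped by the field that carries them.
CC_CHANNELS = {
    "odd_field": ["CC1", "CC2", "TEXT1", "TEXT2"],
    "even_field": ["CC3", "CC4", "TEXT3", "TEXT4", "XDS"],
}

def channel_purpose(name: str) -> str:
    """Rough purpose of each channel family, per the description above."""
    if name.startswith("CC"):
        return "dialog captions (possibly in different languages)"
    if name.startswith("TEXT"):
        return "informational text such as weather forecasts or news"
    if name == "XDS":
        return "extended data services (time, network, program name, V-CHIP)"
    return "unknown"
```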
The data stream of the ATSC closed caption resource is not carried by a data stream with its own packet identifier PID, but is carried in Moving Picture Experts Group-2 (MPEG-2) Picture User Data. As shown in fig. 2, fig. 2 shows the transmission structure of digital video, Program Map Table (PMT), Event Information Table (EIT), audio, other data, and synchronization information in a digital TV code stream under the CEA-708 standard. The bit stream of the digital television comprises audio data, video data, and control data, wherein the control data is responsible for controlling the playing of the audio data and the video data. As can be seen in fig. 2, digital television closed caption (DTVCC) service data, including caption text, window instructions, and so on, is encapsulated in the Picture User Data within the video data.
For such a closed caption video stream, when the decoder at the receiving end plays the video stream, a dedicated hardware module for decoding the video stream of this digital television standard must be designed separately so that the video stream can be decoded and played smoothly, as shown in fig. 3. Fig. 3 shows a block diagram of a receiver 300 of the CEA-708 digital television standard. The receiver 300 includes a tuner 101 for receiving a radio frequency signal and performing frequency conversion, filtering, and automatic gain control; a demodulator 102, configured to demodulate data output by the tuner 101; a demultiplexer 303, configured to demultiplex data output by the demodulator 102 and decompose the data into individual ATSC-audio data and ATSC-video data; and a video decoder 304 for further decoding the ATSC-video data and separating out the ATSC-closed caption CC data.
It can be seen that closed caption data extracted this way is acquired as a special data stream through a special channel (e.g., the additional video decoder 304). However, this makes the resource acquisition mode at the software application layer non-uniform and makes the hardware architecture more complex.
The method and device of the present application unify the resource acquisition mode at the software application layer and make the hardware architecture simple and clear.
Fig. 4 shows a block diagram of a video data stream decoding system according to an embodiment of the present application.
As shown in fig. 4, the video data stream decoding system 400 includes: a demultiplexer 401 configured to demultiplex first video stream data based on the first digital television standard from the received bit stream based on the first digital television standard; a first video decoder 402 connected to the demultiplexer 401, the first video decoder 402 configured to decode the first video stream data based on the first digital television standard demultiplexed from the demultiplexer 401 and separate first subtitle data, and transmit the first subtitle data to the demultiplexer 401; wherein the demultiplexer 401 is configured to output the first subtitle data, the first video stream data based on the first digital television standard, and the first audio stream data based on the first digital television standard.
Fig. 5 shows an exemplary diagram of an application of a decoding system according to the embodiment of fig. 4.
As shown in fig. 5, the tuner 101 is used for receiving a radio frequency signal and is responsible for frequency conversion, filtering, and automatic gain control. The demodulator 102 is configured to demodulate data output from the tuner 101 to obtain a bitstream according to the first digital television standard. The demultiplexer 401 is configured to demultiplex first video stream data based on the first digital television standard and first audio stream data based on the first digital television standard from the received bit stream based on the first digital television standard. The first video decoder 402 is configured to decode the first video stream data based on the first digital television standard demultiplexed from the demultiplexer 401 and separate out the first subtitle data, and transmit the first subtitle data to the demultiplexer 401. The demultiplexer 401 outputs first subtitle data, first video stream data based on a first digital television standard, and first audio stream data based on the first digital television standard.
Here, the first digital television standard may be a standard in which subtitle data is embedded in a video stream, such as the CEA-708 standard used in the American ATSC system.
Here, it can be seen that only one first video decoder 402 is added; without changing the hardware structure and function of the conventional tuner, demodulator, and demultiplexer, a code stream of the first digital television standard in which the caption data is embedded in the video stream can be decoded compatibly. The special data stream acquisition and processing steps and the hardware module dedicated to ATSC-closed caption CC data shown in fig. 3 are no longer required. Compatibility is thus increased, the resource acquisition mode at the software application layer is unified, and the hardware architecture becomes simple and clear.
Of course, fig. 5 also shows the process by which the demultiplexer 401 demultiplexes a code stream based on a second digital television standard. Here, the second digital television standard is different from the first digital television standard and may be a standard in which subtitle data, video stream data, and audio stream data are individually time-division multiplexed, as in most digital television standards: the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
The demultiplexer 401 may, in the conventional manner, identify the second audio stream data, the second video stream data, and the second subtitle data based on the second digital television standard from the code stream by their respective packet identifiers PID, and separate them. That is, the second audio stream data, the second video stream data, and the second subtitle data are each assigned a corresponding packet identifier PID so that the demultiplexer 401 knows how to separate them.
For example, the second audio stream data is assigned a packet identifier PID of 1111, the second video stream data a PID of 2222, and the second subtitle data a PID of 3333. Accordingly, the demultiplexer 401 identifies the stream whose PID is 1111 as the second audio stream data, the stream whose PID is 2222 as the second video stream data, and the stream whose PID is 3333 as the second subtitle data.
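The PID-based separation just described can be sketched as a simple routing table. The PID values are the examples from the text; the function name and data shapes are hypothetical, and a real demultiplexer operates on transport stream packets rather than tuples:

```python
def route_by_pid(packets, pid_map):
    """Group (pid, payload) pairs into named streams by packet identifier.

    `pid_map` maps a PID to a stream name, e.g.
    {1111: "audio", 2222: "video", 3333: "subtitle"}.
    Packets with unknown PIDs are dropped, as a PID filter would drop them.
    """
    streams = {name: [] for name in pid_map.values()}
    for pid, payload in packets:
        name = pid_map.get(pid)
        if name is not None:
            streams[name].append(payload)
    return streams

demuxed = route_by_pid(
    [(1111, b"A0"), (2222, b"V0"), (3333, b"S0"), (4095, b"?")],
    {1111: "audio", 2222: "video", 3333: "subtitle"},
)
```

Here the packet carrying the unmapped PID 4095 is silently discarded, while the other three payloads land in their respective streams.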
Next, the specific process by which the first video decoder 402 separates out the first subtitle data and the demultiplexer packetizes it will be described in detail.
Fig. 6 shows a schematic diagram of the MPEG-2 transport stream format used as the bit stream of a digital television (DTV), i.e., the transport stream format of the first digital television standard.
The transport stream format includes a packet identifier PID, which is a bit string 13 bits in length. Different packet identifiers PID are set for the audio stream, the video stream, and the control stream, respectively. Accordingly, the demultiplexer 401 may separate the first video stream data and the first audio stream data, converted into the packetized elementary stream (PES) data format, as well as the first control stream data, from the MPEG-2 transport stream (specifically, from the data_byte field of the transport stream format) according to their respectively different packet identifiers PID.
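The 13-bit PID sits at a fixed position in the 188-byte transport stream packet header: the low 5 bits of byte 1 and all 8 bits of byte 2 (byte 0 is the sync byte 0x47, per ISO/IEC 13818-1). A minimal extraction sketch (the function name is hypothetical):

```python
def ts_packet_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from an MPEG-2 transport stream packet.

    The PID occupies the low 5 bits of header byte 1 and all of byte 2;
    byte 0 is the sync byte 0x47.
    """
    if len(packet) < 4 or packet[0] != 0x47:
        raise ValueError("not a valid TS packet")
    return ((packet[1] & 0x1F) << 8) | packet[2]

# A minimal packet carrying PID 0x1FFF (8191, the largest 13-bit value):
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
```

This bit layout is also why the PID range quoted later in the text runs from 0 to 8191 (0x1FFF): 8191 is the largest value 13 bits can hold.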
Fig. 7 shows a schematic diagram of an abridged version of the packetized elementary stream PES data format, with some fields omitted.
The first video decoder 402 acquires elementary stream (ES) video data from the first video stream data in the PES data format shown in fig. 7. Fig. 8 shows a schematic diagram of an abridged version of the elementary stream ES data format, with some fields omitted.
The first video decoder 402 obtains, from the elementary stream ES video data, the user data in which the first subtitle data resides, and finally converts that user data into the first subtitle data, for example, closed caption CC data. Fig. 9 shows a schematic diagram of the data format of closed caption CC data. Note that the first subtitle data obtained by the first video decoder 402 is not in a standard transport stream format.
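As a rough illustration of how caption payloads are located inside the ES user data: in ATSC A/53, picture user data begins with the MPEG-2 start code 0x000001B2, and caption payloads are tagged with the ATSC identifier "GA94" followed by user_data_type_code 0x03 (cc_data). The sketch below, with hypothetical naming, only locates the tagged payloads; parsing the cc_data constructs themselves (cc_count, cc_valid, the data-byte pairs shown in fig. 9) is omitted:

```python
def find_atsc_cc_payloads(es: bytes):
    """Scan MPEG-2 elementary stream bytes for ATSC caption user data.

    Looks for user_data start code 0x000001B2 followed by the ATSC
    identifier b"GA94" and user_data_type_code 0x03 (cc_data), and
    returns the raw payload up to the next start-code prefix.
    """
    START = b"\x00\x00\x01\xb2"
    payloads = []
    i = es.find(START)
    while i != -1:
        j = i + len(START)
        if es[j:j + 4] == b"GA94" and es[j + 4:j + 5] == b"\x03":
            nxt = es.find(b"\x00\x00\x01", j)
            payloads.append(es[j + 5:nxt if nxt != -1 else len(es)])
        i = es.find(START, i + 1)
    return payloads
```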
In order for the demultiplexer 401 to still be able to separate the first subtitle data (e.g., the closed caption CC data) by a packet identifier PID, the demultiplexer 401 is further configured to packetize the first subtitle data into a first subtitle identifier PID data stream carrying a first subtitle identifier, wherein the first subtitle identifier is different from the second subtitle identifier of the second subtitle data based on the second digital television standard.
Here, in one embodiment, the demultiplexer 401 may distinguish the first subtitle data from the second subtitle data based on the second digital television standard, so that it knows to packetize the first subtitle data; the second subtitle data is already packetized and does not need to be packetized by the demultiplexer 401. Since each packet identifier in a bit stream based on the second digital television standard is a 13-bit binary number, i.e., in the range 0 to 8191 (0 to 0x1FFF), the first video decoder 402 can attach a parameter greater than 8191, e.g., 8192, to the first subtitle data. When the demultiplexer 401 receives first subtitle data tagged with 8192, it does not treat it as second subtitle data (whose identifiers lie in the range 0 to 8191) but instead packetizes the first subtitle data with the first subtitle identifier PID. Of course, this is not mandatory; the demultiplexer 401 may also determine, upon receiving the first subtitle data obtained by the first video decoder 402, that the data stream carries no PID and directly packetize it with a PID.
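The tagging decision above can be sketched in a few lines. The threshold follows directly from the 13-bit PID range in the text; the function and label names are hypothetical:

```python
MAX_TS_PID = 0x1FFF  # 8191: the largest value a 13-bit TS PID can hold

def handle_subtitle_input(tag: int, payload: bytes):
    """Decide whether incoming subtitle data still needs packetizing.

    Per the scheme above, the first video decoder tags decoder-extracted
    caption data with a value above the 13-bit PID range (e.g. 8192), so
    the demultiplexer can tell it apart from already-packetized second
    subtitle data, whose identifiers all lie in 0..8191.
    """
    if tag > MAX_TS_PID:
        return ("packetize", payload)   # raw CC data from the video decoder
    return ("pass_through", payload)    # already-packetized PES data
```

The design choice here is that no extra signaling channel is needed: the out-of-range value itself marks the data as unpacketized.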
Having described above, in connection with figs. 7-9, how the first subtitle data is obtained from the packetized elementary stream PES, the demultiplexer 401 may packetize the obtained first subtitle data by reversing that process, from raw data to elementary stream ES to packetized elementary stream PES, as a first subtitle identifier PID data stream carrying the first subtitle identifier. The detailed packetizing process is not described herein.
Here, the first caption identifier may be chosen to be distinct from every packet identifier originally used by the demultiplexer 401 to separate code streams based on the second digital television standard, so that the demultiplexer 401 can distinguish the first caption data of the first digital television standard from that of the second digital television standard. For example, if the packet identifiers for the code stream based on the second digital television standard are certain 13-bit binary numbers in the range 0 to 8191 (0 to 0x1FFF), e.g., 2222 and 3333, then the first caption identifier may be set to some other number not already in use, e.g., 4444, and the first caption data can then be separated like normal caption data. Of course, the setting of the first subtitle identifier is not limited thereto, as long as it enables the demultiplexer 401 to correctly distinguish the first subtitle data of the first digital television standard from that of the second digital television standard. Likewise, if, as in the embodiment above, the demultiplexer 401 directly packetizes the data stream with a PID upon receiving the first subtitle data obtained by the first video decoder 402, that PID can be set to one different from the conventional PIDs, for example, 8192.
Of course, the above PID assignment and packetizing processes are merely examples chosen to minimize changes to the demultiplexer's interface parameters; they are not limiting, and the demultiplexer may in fact packetize the subtitle data decoded by the first video decoder by other means, which need not be elaborated here.
Then, the demultiplexer 401 is configured to separate the first subtitle data from the first subtitle identifier PID data stream according to the first subtitle identifier after receiving the first subtitle identifier PID data stream transmitted from the first video decoder 402.
As before, the first video stream data is also assigned a first video stream identifier PID and the first audio stream data is assigned a first audio stream identifier PID, wherein the demultiplexer separates the first subtitle data, the first video stream data, and the first audio stream data, respectively, based on the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID, respectively.
For example, the first video stream identifier PID is 4567, the first audio stream identifier PID is 6789, and the first subtitle identifier is 8192. Separating the first subtitle data, the first video stream data, and the first audio stream data according to the first subtitle identifier PID, the first video stream identifier PID, and the first audio stream identifier PID then includes: identifying the code stream whose PID is 4567 as the first video stream data, the code stream whose PID is 6789 as the first audio stream data, and the code stream whose PID is 8192 as the first subtitle data.
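Seen from the demultiplexer's side, adding the repacked caption stream is just one more entry in the PID lookup table; the separation logic itself is unchanged. A sketch using the example values from the text (note that 8192 lies just above the 13-bit TS PID range, which is consistent with its role here as a software-assigned identifier rather than an over-the-air PID; names are hypothetical):

```python
# PID-to-stream table for the first digital television standard, using the
# example values from the text. Entry 8192 is the first subtitle identifier
# assigned to the repacked caption data.
FIRST_STANDARD_PIDS = {4567: "video", 6789: "audio", 8192: "subtitle"}

def classify(pid: int) -> str:
    """Name the stream a given identifier belongs to, or 'unknown'."""
    return FIRST_STANDARD_PIDS.get(pid, "unknown")
```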
Here, it can be seen that the demultiplexer 401 has the same hardware structure and function as the demultiplexers of most digital television standards; without extensive modification, it can be made compatible with a first digital television standard that differs from most digital television standards, such as the CEA-708 standard used in the American ATSC system.
In summary, the first video decoder 402 feeds the separated first caption data back to the demultiplexer 401, which re-encapsulates it into a PID data stream specific to the first digital television standard, so that the entire system acquires data of the first digital television standard, such as ATSC-closed caption CC data, in a standard and unified manner.
It can be seen that only one first video decoder 402 is added; without changing the hardware structure and function of the conventional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard in which the caption data is embedded in the video stream can be decoded compatibly. The special data stream acquisition and processing steps and the hardware module dedicated to ATSC-closed caption CC data shown in fig. 3 are likewise not required. Compatibility is thus increased, the resource acquisition mode at the software application layer is unified, and the hardware architecture becomes simple and clear.
Fig. 10 shows a flow chart of a method of decoding a video data stream according to an embodiment of the present application.
The video data stream decoding method 1000 shown in fig. 10 includes: step 1001, demultiplexing first video stream data based on a first digital television standard from a received bit stream based on the first digital television standard by a demultiplexer; step 1002, decoding, by a first video decoder, first video stream data based on a first digital television standard demultiplexed from a demultiplexer and separating first subtitle data, and sending the first subtitle data to the demultiplexer; step 1003, outputting the first subtitle data, the first video stream data based on the first digital television standard and the first audio stream data based on the first digital television standard by a demultiplexer.
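Steps 1001 through 1003 can be sketched as a single pipeline. The callables and their signatures are hypothetical stand-ins for the hardware blocks of fig. 4, not an actual API:

```python
def decode_first_standard_stream(bitstream, demux, video_decoder):
    """Sketch of method 1000 using hypothetical demux/decoder objects.

    demux.demultiplex splits the bit stream into video and audio PES data
    (step 1001); video_decoder.decode returns decoded frames plus the
    caption data embedded in the video (step 1002); and
    demux.packetize_and_output re-wraps the captions so that all three
    streams leave through the demultiplexer uniformly (step 1003).
    """
    video_pes, audio_pes = demux.demultiplex(bitstream)       # step 1001
    frames, subtitle_data = video_decoder.decode(video_pes)   # step 1002
    return demux.packetize_and_output(                        # step 1003
        subtitle_data, video_pes, audio_pes)
```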
Here, the first digital television standard may be a standard in which subtitle data is embedded in a video stream, such as the CEA-708 standard used in the American ATSC system.
Here, it can be seen that only one first video decoder is added; without changing the hardware structure and function of the conventional tuner, demodulator, and demultiplexer, a code stream of the first digital television standard in which the caption data is embedded in the video stream can be decoded compatibly. The special data stream acquisition and processing steps and the hardware module dedicated to ATSC-closed caption CC data shown in fig. 3 are no longer required. Compatibility is thus increased, the resource acquisition mode at the software application layer is unified, and the hardware architecture becomes simple and clear.
In one embodiment of the present application, step 1002 may comprise packetizing, by the demultiplexer, the first subtitle data into a first subtitle identifier data stream with the first subtitle identifier. Wherein the first caption identifier is different from a second caption identifier of second caption data based on a second digital television standard.
In one embodiment of the present application, step 1003 may include: the first subtitle data is separated by a demultiplexer according to the first subtitle identifier.
In one embodiment of the present application, the first video stream data is assigned a first video stream identifier, and the first audio stream data is assigned a first audio stream identifier, wherein step 1003 may include: and separating the first subtitle data, the first video stream data and the first audio stream data respectively according to the first subtitle identifier, the first video stream identifier and the first audio stream identifier by a demultiplexer.
In one embodiment of the present application, the first digital television standard may be a standard in which subtitle data is embedded in a video stream, and the second digital television standard may be a standard in which subtitle data, video stream data, and audio stream data are individually time-division multiplexed.
In one embodiment of the present application, the first digital television standard may be the American Advanced Television Systems Committee (ATSC) standard with the Consumer Electronics Association CEA-708 caption standard, and the second digital television standard may be the Digital Video Broadcasting (DVB) standard or the Integrated Services Digital Broadcasting (ISDB) standard.
In summary, the first video decoder feeds the separated first caption data back to the demultiplexer, which re-encapsulates it into a PID data stream specific to the first digital television standard, so that the entire system acquires data of the first digital television standard, such as ATSC-closed caption CC data, in a standard, unified manner.
It can be seen that only one first video decoder is added; without changing the hardware structure and function of the traditional tuner, demodulator, and demultiplexer, the code stream of the first digital television standard in which the caption data is embedded in the video stream can be decoded compatibly. The special data stream acquisition and processing steps and the hardware module dedicated to ATSC-closed caption CC data shown in fig. 3 are likewise not needed. Compatibility is thus increased, the resource acquisition mode at the software application layer is unified, and the hardware architecture becomes simple and clear.
FIG. 11 illustrates a block diagram of an exemplary computer system suitable for use in implementing embodiments of the present application.
The computer system may include a processor (H1); a memory (H2) coupled to the processor (H1) and having stored therein computer-executable instructions for performing, when executed by the processor, the steps of the respective methods of embodiments of the present application.
The processor (H1) may include, but is not limited to, for example, one or more processors or microprocessors or the like.
The memory (H2) may include, but is not limited to, for example, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, computer storage media (e.g., hard disk, floppy disk, solid state disk, removable disk, CD-ROM, DVD-ROM, Blu-ray disk, and the like).
In addition, the computer system may include a data bus (H3), an input/output (I/O) bus (H4), a display (H5), and an input/output device (H6) (e.g., a keyboard, a mouse, a speaker, etc.), among others.
The processor (H1) may communicate with external devices (H5, H6, etc.) via a wired or wireless network (not shown) over an I/O bus (H4).
The memory (H2) may also store at least one computer-executable instruction for performing, when executed by the processor (H1), the functions and/or steps of the methods in the embodiments described in the present technology.
In one embodiment, the at least one computer-executable instruction may also be compiled or combined into a software product, where the one or more computer-executable instructions, when executed by the processor, perform the functions and/or steps of the method in the embodiments described in the present technology.
Fig. 12 shows a schematic diagram of a non-transitory computer-readable storage medium according to an embodiment of the present disclosure.
As shown in FIG. 12, the computer-readable storage medium 1220 has instructions stored thereon, such as computer-readable instructions 1210. The computer-readable instructions 1210, when executed by a processor, may perform the various methods described above. Computer-readable storage media include, but are not limited to, volatile memory and/or nonvolatile memory. Volatile memory can include, for example, Random Access Memory (RAM) and cache memory. Nonvolatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like. For example, the computer-readable storage medium 1220 may be connected to a computing device such as a computer, and the various methods described above may then be performed by the computing device executing the computer-readable instructions 1210 stored on the computer-readable storage medium 1220.
Of course, the above-mentioned embodiments are merely examples and not limitations, and those skilled in the art may, according to the concepts of the present invention, combine steps and apparatuses from the separately described embodiments to achieve the effects of the present invention; such combined embodiments are also included in the present invention and need not be described here one by one.
It is noted that advantages, effects, and the like, which are mentioned in the present disclosure, are only examples and not limitations, and they are not to be considered essential to various embodiments of the present invention. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the invention is not limited to the specific details described above.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to."
The flowchart of steps in the present disclosure and the above description of methods are merely illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by those skilled in the art, the order of the steps in the above embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are only used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example, using the articles "a," "an," or "the" is not to be construed as limiting the element to the singular.
In addition, the steps and devices in the embodiments are not limited to be implemented in a certain embodiment, and in fact, some steps and devices in the embodiments may be combined according to the concept of the present invention to conceive new embodiments, and these new embodiments are also included in the scope of the present invention.
The individual operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software components and/or modules including, but not limited to, a hardware circuit, an Application Specific Integrated Circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an ASIC, a Field Programmable Gate Array (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that may be used include Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, and the like. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
The methods disclosed herein comprise one or more acts for implementing the described methods. The methods and/or acts may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.
The above-described functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Accordingly, a computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions stored (and/or encoded) thereon that are executable by one or more processors to perform the operations described herein. The computer program product may include packaging materials.
Software or instructions may also be transmitted over a transmission medium. For example, the software may be transmitted from a website, server, or other remote source using a transmission medium such as coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, or microwave.
Further, modules and/or other suitable means for carrying out the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via storage means (e.g., RAM, ROM, or a physical storage medium such as a CD or floppy disk) so that the user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the disclosure and the following claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination of these. Features implementing functions may also be physically located at various locations, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, such that a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions, and alterations to the techniques described herein may be made without departing from the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the processes, machines, manufacture, compositions of matter, means, methods, and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the invention to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.