WO2011015080A1 - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
WO2011015080A1
WO2011015080A1
Authority
WO
WIPO (PCT)
Prior art keywords
user data
channel
queue
data
service
Prior art date
Application number
PCT/CN2010/073670
Other languages
French (fr)
Chinese (zh)
Inventor
舒骏 (Shu Jun)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2011015080A1 publication Critical patent/WO2011015080A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
    • H04L 47/50 - Queue scheduling
    • H04L 47/62 - Queue scheduling characterised by scheduling criteria
    • H04L 47/6215 - Individual queue per QOS, rate or priority

Abstract

Disclosed are a data processing method and a device thereof. The method includes: creating a queue for each type of service, and setting a priority for each queue; placing user data into the queue corresponding to the service type of the user data, wherein the user data includes one or more channel data; and, for each queue, selecting user data from the queue for processing according to a predetermined principle. Through the present invention, the service processing delay of High Speed Uplink Packet Access (HSUPA) is reduced effectively, and the system performance is enhanced.

Description

Data processing method and device

TECHNICAL FIELD
The present invention relates to the field of communications, and in particular to a data processing method and device.
BACKGROUND OF THE INVENTION
In a Wideband Code Division Multiple Access (WCDMA) wireless communication system, the receiving end usually employs a RAKE receiver, which demodulates multiple multipath components and then performs maximum ratio combining to recover the radio signal. A RAKE receiver of a conventional base station baseband processor is usually implemented in a WCDMA system in one of the following two ways.

Mode 1: the Dedicated Physical Control Channel (DPCCH) and the Dedicated Physical Data Channel (DPDCH) are demodulated simultaneously. In this mode, since the Spreading Factor (SF) of the DPDCH of the current frame is unknown, the DPDCH can only be demodulated with the minimum SF. After one frame of the DPCCH and the DPDCH has been demodulated, all of the Transport Format Combination Indicator (TFCI) symbol data is collected and decoded to obtain the actual SF. Finally, the already demodulated frame of the DPDCH is integrated a second time.
Mode 2: the DPDCH is demodulated with a delay of one frame relative to the DPCCH. In this mode, the DPCCH is demodulated normally; after one frame has been demodulated, all of the TFCI symbol data is collected and decoded to obtain the actual SF, and only then is DPDCH demodulation started, so the DPDCH can be demodulated with the actual SF.

In the R99 protocol there is no strict timing requirement on dedicated physical channel processing, and the system imposes a minimum SF for each service requirement; for example, the minimum SF of the data service is 4, and the corresponding actual SF range is 4, 8 and 16. Of the two RAKE receiver arrangements above, the delayed demodulation mode therefore saves more hardware resources than the simultaneous demodulation mode, because demodulation and data storage can use the actual SF and the secondary integration step is removed; for this reason, most communication systems use the delayed demodulation mode.
However, the R6 protocol of the 3rd Generation Partnership Project (3GPP) and its successors add uplink high-speed services, and the protocols place rather strict requirements on the reception processing of these services, so that the total processing delay allowed in the base station system is greatly reduced. For example, the R6 protocol adds the Enhanced Dedicated Physical Channel (E-DPCH), which mainly carries the High Speed Uplink Packet Access (HSUPA) service, and the processing of this channel is subject to strict timing limits. According to the 3GPP specifications, the total processing delay in the base station system of an HSUPA service with a 2 ms Transmission Time Interval (TTI) cannot exceed 8.3 ms, and that of an HSUPA service with a 10 ms TTI cannot exceed 24.3 ms. Under this limit, a RAKE receiver using the delayed demodulation mode cannot process the E-DPCH within the allowed system processing time; likewise, it cannot meet the processing time limits of the channels newly added by the protocols after R6.

Therefore, to meet the strict delay requirements of the new services, a RAKE receiver using the simultaneous demodulation mode must be used. In this case, all users perform data demodulation at the same time; when a user has finished demodulating one TTI of data and the actual SF has been obtained, secondary despreading has to be performed. Secondary despreading is processed per user, one TTI of data of one user at a time, and when the amount of data is large the processing time is long. If a large amount of time is occupied by R99 services, the HSUPA services are kept waiting, which works against shortening the delay of the HSUPA services.

According to the new protocol requirements, the processing delay of the HSUPA service should be shortened as much as possible, and neither of the two RAKE receivers above can optimize performance in this respect well. The above schemes must therefore be improved with respect to the HSUPA service processing delay.
SUMMARY OF THE INVENTION
The present invention is proposed in view of the problem in the related art that the HSUPA service processing delay is long; the main object of the present invention is to provide an improved data processing scheme so as to solve at least one of the above problems.

In order to achieve the above object, according to one aspect of the present invention, a data processing method is provided. The data processing method according to the present invention includes: establishing a queue for each service type, and setting a priority for each queue; placing user data in the queue corresponding to the service type of the user data, wherein the user data includes one or more channel data; and, for each queue, selecting user data therefrom for processing according to a predetermined principle.

Preferably, before the user data is placed in the queue corresponding to its service type, the method further includes: determining whether the user data is complete, and, in the case that the user data is determined to be complete, placing the user data in the queue corresponding to the service type of the user data.

Preferably, the method further includes: processing the user data in units of channels before determining whether the user data is complete; and processing the user data in units of users after determining that the user data is complete.
Preferably, determining whether the user data is complete includes: determining whether the user data is complete according to information and/or a history record of the user data, wherein the information of the user data includes at least one of the following: the channel number of the channel data, the link number used for channel combining, the number of channels and the channel numbers participating in channel combining, the channel processing completion flag in channel combining, the user number used for channel cascading, an indication of the front/rear position of a single channel in channel cascading, and the channel numbers participating in channel cascading.

Preferably, determining whether the user data is complete further includes: determining whether channel combining is required for the user data; and determining whether channel cascading is required for the user data.

Preferably, the predetermined principle includes: for different queues, preferentially selecting the data in the higher-priority queue; and, for the same queue, preferentially selecting the data whose processing was requested earlier.

Preferably, the service types include: 2 millisecond high speed uplink packet access, 10 millisecond high speed uplink packet access, and the R99 service, wherein the priority of the 2 millisecond high speed uplink packet access service is higher than that of the 10 millisecond high speed uplink packet access service, and the priority of the 10 millisecond high speed uplink packet access service is higher than that of the R99 service.

In order to achieve the above object, according to another aspect of the present invention, a data processing device is provided. The data processing device according to the present invention includes: an establishing module, configured to establish a queue for each service type; a setting module, configured to set a priority for each queue; a placing module, configured to place user data in the queue corresponding to the service type of the user data, wherein the user data includes one or more channel data; and a selecting module, configured to select user data from each queue for processing according to a predetermined principle.

Preferably, the data processing device further includes a judging module, configured to determine whether the user data is complete, and the placing module is configured to place the user data in the queue corresponding to the service type of the user data in the case that the user data is determined to be complete.

Preferably, the judging module includes: a first judging sub-module, configured to determine whether channel combining is required for the user data; and a second judging sub-module, configured to determine whether channel cascading is required for the user data.

Through the present invention, queues and queue priorities are set according to service type and data is processed according to the predetermined principle, which solves the problem in the related art that the HSUPA service processing delay is long, thereby effectively reducing the processing delay of the HSUPA service and improving system performance.

BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings described here are provided for a further understanding of the present invention and form a part of this application; the exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an application scenario of a hybrid service priority processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a hybrid service priority processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of user data integrity judgment according to an embodiment of the present invention;
FIG. 5 is a block diagram of a circuit structure for user data integrity judgment according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the unit structure of the LE Ram in FIG. 5;
FIG. 7 is a schematic diagram of hybrid service processing queues according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the representation of user information according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a hybrid service priority judgment state machine according to an embodiment of the present invention;
FIG. 10 is a structural block diagram of a data processing device according to an embodiment of the present invention; and
FIG. 11 is a detailed structural block diagram of a data processing device according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS
The embodiments of the present invention provide a data processing scheme for processing HSUPA services and/or R99 services. The scheme is applicable to, but not limited to, the demodulation of uplink dedicated channel hybrid services. Based on the service delay requirements of the 3GPP protocols, the scheme defines the processing priorities of the different services and, on that basis, takes the effects of channel combining and channel cascading into account. The processing principle of the scheme is as follows: establish a queue for each service type and set a priority for each queue; place data in the corresponding queue according to its service type; and, for each queue, select data therefrom for processing according to a predetermined principle.

It should be noted that the embodiments in the present application and the features in the embodiments may be combined with each other as long as they do not conflict. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments. In the following embodiments, the steps shown in the flowcharts of the drawings may be performed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described here.

Method embodiments. According to an embodiment of the present invention, a data processing method is provided. FIG. 1 is a flowchart of a data processing method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps S102 to S106.

Step S102: establish a queue (also called a priority processing queue) for each service type, and set a priority for each queue. That is, priority processing queues are established according to the service types of the user data.

Step S104: place the user data in the queue corresponding to the service type of the user data, wherein the user data includes one or more channel data.

Step S106: for each queue, select user data therefrom for processing according to a predetermined principle (also called a decision principle). That is, the user data with the highest priority is selected from the priority processing queues, and a downstream module is notified to process it.
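As a concrete illustration of steps S102 to S106, the following minimal C sketch shows one way the priority processing queues and the selection rule could be organized. It is only an illustrative sketch, not part of the patent: the type and function names (service_type_t, enqueue_user, select_user), the queue depth MAX_USERS and the enum encoding are assumptions introduced here.

    #include <stdbool.h>
    #include <stddef.h>

    /* Service types named in this embodiment: 2 ms HSUPA, 10 ms HSUPA and R99. */
    typedef enum { SVC_HSUPA_2MS = 0, SVC_HSUPA_10MS = 1, SVC_R99 = 2, SVC_COUNT = 3 } service_type_t;

    #define MAX_USERS 64                  /* hypothetical depth of each queue ("N users") */

    typedef struct {
        int user_id;                      /* SR ID of a user whose TTI data is complete */
    } user_entry_t;

    typedef struct {
        int priority;                     /* step S102: one priority per queue */
        int head, tail, count;            /* FIFO bookkeeping: requested first, served first */
        user_entry_t slot[MAX_USERS];
    } service_queue_t;

    static service_queue_t queues[SVC_COUNT];

    /* Step S102: one queue per service type; 2 ms HSUPA highest, R99 lowest. */
    static void init_queues(void)
    {
        for (int t = 0; t < SVC_COUNT; t++) {
            queues[t].priority = SVC_COUNT - t;
            queues[t].head = queues[t].tail = queues[t].count = 0;
        }
    }

    /* Step S104: place complete user data in the queue matching its service type. */
    static bool enqueue_user(service_type_t type, int user_id)
    {
        service_queue_t *q = &queues[type];
        if (q->count == MAX_USERS)
            return false;                 /* queue full */
        q->slot[q->tail].user_id = user_id;
        q->tail = (q->tail + 1) % MAX_USERS;
        q->count++;
        return true;
    }

    /* Step S106: take the oldest entry of the highest-priority non-empty queue
     * and hand it to the downstream (symbol-level) processing. */
    static bool select_user(int *user_id_out)
    {
        service_queue_t *best = NULL;
        for (int t = 0; t < SVC_COUNT; t++) {
            if (queues[t].count > 0 && (best == NULL || queues[t].priority > best->priority))
                best = &queues[t];
        }
        if (best == NULL)
            return false;                 /* no user data is waiting */
        *user_id_out = best->slot[best->head].user_id;
        best->head = (best->head + 1) % MAX_USERS;
        best->count--;
        return true;
    }

In the embodiment itself the stored unit is channel information rather than a bare user number, and rule 3 of the state machine described later additionally keeps the channels of one user together; the sketch only captures the per-queue priority and the first-requested, first-served behaviour.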
Preferably, the predetermined principle may be: for different queues, preferentially selecting the user data in the higher-priority queue; and, for the same queue, preferentially selecting the user data whose processing was requested earlier.

Before step S104, it is necessary to determine whether the user data is complete according to information and/or a history record of the user data and, in the case that the user data is determined to be complete, to place the user data in the queue corresponding to its service type. The information of the user data may be information sent by an upstream module and may include: the channel number (Channel ID) of the channel data, the link number (LE ID) used for channel combining, the number of channels and the channel numbers participating in channel combining, the channel processing completion flag in channel combining, the user number (SR ID) used for channel cascading, an indication of the front/rear position of a single channel in channel cascading, and the channel numbers participating in channel cascading. The history record may refer to the completion status of the other channels under the same link number (LE ID) or user number (SR ID). Judging whether the data is complete requires first determining whether channel combining is required for the user data, and then determining whether channel cascading is required for the user data.

Preferably, before determining whether the user data is complete, the user data is processed in units of channels; after it is determined that the user data is complete, the user data is processed in units of users.

Preferably, the services may be the 2 ms HSUPA service, the 10 ms HSUPA service and the R99 service, where the 2 ms HSUPA service has a higher priority than the 10 ms HSUPA service, and the 10 ms HSUPA service has a higher priority than the R99 service.

The implementation of the embodiment of the present invention is described in detail below with reference to examples. FIG. 2 is a schematic diagram of an application scenario of hybrid service priority processing according to an embodiment of the present invention. As shown in FIG. 2, the whole uplink dedicated data channel demodulation can be divided into two parts, dedicated channel (DCH) chip-level demodulation and dedicated channel (DCH) symbol-level demodulation, and the demodulation resources of these two parts are allocated differently. In DCH chip-level demodulation, the demodulation resources are allocated by channel number (Channel ID). In DCH symbol-level demodulation, demodulation is performed per user number (SR ID), and one user number (SR ID) may correspond to several channel numbers (Channel IDs). Their specific relationship is that one user number (SR ID) has one or two link numbers (LE IDs) under it, and each link number (LE ID) has one to three channel numbers (Channel IDs) under it. The hybrid service priority judgment sits between these two parts: it converts the demodulation task queue and establishes new task queues according to the priorities of the user services, and during this process the status of the above resources has to be monitored.

As shown in FIG. 2, before the hybrid service priority processing, the data of the users is processed at chip level in a time-shared manner in units of a certain number of chips; after the hybrid service priority processing, the data of the users is processed at symbol level in a time-shared manner in units of TTIs. That is, the hybrid service priority processing of this embodiment is mainly responsible for two functions: one is to monitor the chip-level processing of the user data, and the other is to arrange a suitable order for the symbol-level processing of the user data.
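The SR ID / LE ID / Channel ID hierarchy described above (one user number owning one or two link numbers, each link number owning one to three channel numbers) can be pictured with a small data structure. The following sketch is only an aid to reading; the struct and field names are assumptions and do not come from the patent.

    #define MAX_LINKS_PER_USER    2       /* one SR ID has one or two LE IDs      */
    #define MAX_CHANNELS_PER_LINK 3       /* one LE ID has one to three channels  */

    typedef struct {
        int le_id;                                /* link number used for channel combining */
        int num_channels;                         /* 1..3                                   */
        int channel_id[MAX_CHANNELS_PER_LINK];    /* units of chip-level demodulation       */
    } link_entity_t;

    typedef struct {
        int sr_id;                                /* unit of symbol-level demodulation      */
        int num_links;                            /* 1..2                                   */
        link_entity_t link[MAX_LINKS_PER_USER];
    } user_entity_t;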
FIG. 3 is a flowchart of a hybrid service priority processing method according to an embodiment of the present invention. As shown in FIG. 3, the flow covers three aspects: aspect one, user data integrity judgment; aspect two, the hybrid service processing queues; and aspect three, hybrid service priority judgment. These three aspects are described below in turn.

Aspect one. The user data integrity judgment monitors the chip-level processing of the user data, records the intermediate process, and outputs the result to the hybrid service processing queues. FIG. 4 is a flowchart of the user data integrity judgment according to an embodiment of the present invention. As shown in FIG. 4, the judgment flow is started whenever the chip-level processing of a channel is completed. First the history record is read; then the channel combining judgment is entered, which mainly determines whether all of the channels that need to be combined have completed chip-level processing. If the result of the judgment is NO, the current processing information is saved and the user data integrity judgment flow is aborted. If the result of the judgment is YES, it can be confirmed that the channel combining data is ready.

FIG. 5 is a block diagram of a circuit structure for user data integrity judgment according to an embodiment of the present invention. As shown in FIG. 5, the module LE Ctrl and the module LE Ram are used to perform the channel combining judgment. The upstream module signals the completion of chip-level processing of a channel through the signal ch_rdy and carries the related parameters in the signal para. According to the parameter LE ID in the signal para, the module LE Ctrl can read the channel combining history information from the LE Ram.
The LE Ram is indexed by LE ID. FIG. 6 is a schematic diagram of the unit structure of the LE Ram in FIG. 5. As shown in FIG. 6, num indicates the number of channels to be combined; LE ID indicates the link number; Channel ID 0, Channel ID 1 and Channel ID 2 indicate the channel numbers to be combined; and the flag bits flag0, flag1 and flag2 record whether the upstream chip-level demodulation of the corresponding channel has been completed. All of this information can be obtained from the signal para.

Then, in the module LE Ctrl, the current channel information is combined with the history information to determine whether all of the channels participating in channel combining have completed chip-level processing. The judgment is based on the relationship between num and flag0, flag1 and flag2. If num is 1, no channel combining is needed; if num is 2, two of flag0, flag1 and flag2 must be 1 before it can be concluded that the upstream module has finished processing the data to be combined; and if num is 3, flag0, flag1 and flag2 must all be 1. If the judgment result is yes, num, flag0, flag1 and flag2 are all cleared and written into the LE Ram, that is, the history information is cleared; if the judgment result is no, the current Channel ID and flag information are added at the corresponding position, and the updated information is written back into the LE Ram as history information that can be queried next time.
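The num/flag relationship used by the module LE Ctrl can be summarized by a short check. The sketch below is illustrative only: the record layout follows FIG. 6, but the array depth, the function name and the assumption that num and the channel numbers have already been written into the record (in the embodiment they arrive in the signal para) are made up here.

    #include <stdbool.h>
    #include <string.h>

    #define LE_RAM_DEPTH 128              /* hypothetical number of LE Ram entries, indexed by LE ID */

    typedef struct {                      /* one LE Ram unit, as in FIG. 6 */
        int  num;                         /* number of channels to be combined (1..3)    */
        int  le_id;                       /* link number                                 */
        int  channel_id[3];               /* Channel ID 0..2                             */
        bool flag[3];                     /* flag0..flag2: chip-level demodulation done? */
    } le_record_t;

    static le_record_t le_ram[LE_RAM_DEPTH];

    /* Called when ch_rdy reports that one channel has finished chip-level processing.
     * Returns true when every channel of the link is done; the history is then cleared,
     * matching the behaviour described for the module LE Ctrl. */
    static bool channel_combining_ready(int le_id, int channel_id)
    {
        le_record_t *rec = &le_ram[le_id];
        int done = 0;

        for (int i = 0; i < rec->num; i++) {
            if (rec->channel_id[i] == channel_id)
                rec->flag[i] = true;      /* add the current channel's flag at its position */
            if (rec->flag[i])
                done++;
        }

        if (done < rec->num)
            return false;                 /* not all channels ready: keep the updated history */

        rec->num = 0;                     /* all ready: clear num and the flags */
        memset(rec->flag, 0, sizeof(rec->flag));
        return true;
    }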
Next, the channel cascading judgment is performed. As shown in FIG. 4: in step S402, the chip-level processing of a channel is completed; in step S404, the history record is read; in step S406, the channel combining judgment is performed; in step S408, the record is saved and the flow is terminated; in step S410, the channel combining data is all ready; in step S412, the history record is read; and in step S414, the channel cascading judgment is performed. The channel cascading judgment mainly determines whether all of the channels that need to be cascaded have completed chip-level processing and determines the order of these channels; if the result is no, the flow proceeds to step S416, and if the result is yes, the flow proceeds to step S418. In step S416, the current processing information is saved and the user data integrity judgment flow is aborted. In step S418, it can be confirmed that the user data is complete. As shown in FIG. 5, the module SR Ctrl and the module SR Ram are used to perform the channel cascading judgment; their working principle and implementation are similar to those of the module LE Ctrl and the module LE Ram and are not described again here.

Aspect two. The hybrid service processing queues arrange the results of the chip-level processing according to service classification and trigger the hybrid service priority judgment. When the user data integrity judgment has finished, the hybrid service processing queues are established according to the service types of the user data. There are three main service types: the 2 ms HSUPA service, the 10 ms HSUPA service and the R99 service. FIG. 7 is a schematic diagram of the hybrid service processing queues according to an embodiment of the present invention. As shown in FIG. 7, three queues are established according to service type: a 2 ms HSUPA service queue, a 10 ms HSUPA service queue and an R99 service queue. Each queue can store the information of N users, and user data can enter the corresponding queue as soon as it has passed the integrity judgment.
The user information in a queue can be represented in many ways. FIG. 8 is a schematic diagram of the representation of user information according to an embodiment of the present invention. As shown in FIG. 8, Channel ID records the channel number; the channel combining flag flag_LE marks channel combining, indicating that the next channel of this service type is to be combined with the current channel; and the channel cascading flag flag_SR marks channel cascading, indicating that the next channel of this service type is to be cascaded with the current channel. Channel combining and channel cascading have to be taken into account and judged from the flag bits flag_LE and flag_SR, so that all of the channels of the same user are processed one after another in the correct order.
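A queue entry of the kind shown in FIG. 8 can be sketched as follows. Again this is only an illustration: the struct name, the boolean encoding of flag_LE and flag_SR, and the helper that measures how many consecutive entries belong to one user are assumptions, not the patent's implementation.

    #include <stdbool.h>

    typedef struct {                      /* one stored unit: a single channel (FIG. 8)    */
        int  channel_id;                  /* Channel ID                                    */
        bool flag_le;                     /* flag_LE: next entry is combined with this one */
        bool flag_sr;                     /* flag_SR: next entry is cascaded with this one */
    } queue_entry_t;

    /* The information of one user is a run of consecutive entries; the run ends at the
     * first entry whose flag_LE and flag_SR are both clear. */
    static int user_run_length(const queue_entry_t *q, int start, int len)
    {
        int i = start;
        while (i < len && (q[i].flag_le || q[i].flag_sr))
            i++;                          /* entry i and entry i+1 belong to the same user */
        return (i < len) ? (i - start + 1) : (len - start);
    }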
Aspect three. The hybrid service priority judgment arranges a suitable order for the symbol-level processing of the user data. Once the hybrid service processing queues have been established, the hybrid service priority judgment can be performed. FIG. 9 is a schematic diagram of a hybrid service priority judgment state machine according to an embodiment of the present invention. The state machine shown in FIG. 9 judges priority according to the following principles:

1) For data of the same service type, whatever requested processing first is processed first.

2) When several service types request processing at the same time, the 2 ms HSUPA service has the highest priority, the 10 ms HSUPA service comes next, and the R99 service has the lowest priority.

3) If the channels of a service type need channel combining or channel cascading, it must be ensured that all of these operations are processed continuously to completion before a selection can be made according to the priority policy in 2).
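The three principles above can be condensed into a single comparison between two pending requests. The following sketch is illustrative only: the request_t fields, in particular the mid_run flag used to model rule 3 (a started combining/cascading run must be finished before priorities are re-evaluated), are assumptions introduced here.

    #include <stdbool.h>

    typedef enum { SVC_HSUPA_2MS, SVC_HSUPA_10MS, SVC_R99 } service_type_t;

    typedef struct {
        service_type_t type;              /* service type of the queue the request sits in           */
        unsigned long  arrival;           /* order in which processing was requested (rule 1)        */
        bool           mid_run;           /* a combining/cascading run is already in progress (rule 3) */
    } request_t;

    static int svc_priority(service_type_t t)
    {
        switch (t) {                      /* rule 2: 2 ms HSUPA > 10 ms HSUPA > R99 */
        case SVC_HSUPA_2MS:  return 3;
        case SVC_HSUPA_10MS: return 2;
        default:             return 1;
        }
    }

    /* Returns true if request a should be served before request b. */
    static bool serve_before(const request_t *a, const request_t *b)
    {
        if (a->mid_run != b->mid_run)
            return a->mid_run;            /* rule 3: finish the unbroken run first    */
        if (svc_priority(a->type) != svc_priority(b->type))
            return svc_priority(a->type) > svc_priority(b->type);   /* rule 2 */
        return a->arrival < b->arrival;   /* rule 1: first requested, first processed */
    }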
At this point the whole priority processing procedure is complete. With this embodiment the processing delay of the HSUPA service is reduced; in particular, for the 2 ms HSUPA service, the processing time is reduced compared with the requirement of the protocol.

Device embodiments. According to an embodiment of the present invention, a data processing device is provided. FIG. 10 is a structural block diagram of a data processing device according to an embodiment of the present invention. As shown in FIG. 10, the device includes an establishing module 12, a setting module 14, a placing module 16 and a selecting module 18, which are described in detail below.

The establishing module 12 is configured to establish a queue for each service type. The setting module 14 is connected to the establishing module 12 and is configured to set a priority for each queue. The placing module 16 is connected to the setting module 14 and is configured to place user data, according to its service type, in the queue corresponding to the service type of the user data, wherein the user data includes one or more channel data. The selecting module 18 is connected to the placing module 16 and is configured to select user data from each queue for processing according to a predetermined principle.
Preferably, the predetermined principle may be: for different queues, preferentially selecting the user data in the higher-priority queue; and, for the same queue, preferentially selecting the user data whose processing was requested earlier.

FIG. 11 is a detailed structural block diagram of a data processing device according to an embodiment of the present invention. As shown in FIG. 11, the device further includes a judging module 10, which is connected to the setting module 14 and configured to determine whether the user data is complete. The placing module 16 is connected to the judging module 10 and is specifically configured to place the data in the queue corresponding to the service type of the user data when the user data is determined to be complete. As shown in FIG. 11, the judging module 10 includes a first judging sub-module 102 and a second judging sub-module 104. The first judging sub-module 102 is configured to determine whether channel combining is required for the user data; the second judging sub-module 104 is connected to the first judging sub-module 102 and is configured to determine whether channel cascading is required for the user data.

The device is described in detail below with reference to an example. The judging module 10 may also be called the user data integrity judgment module; the establishing module 12, the setting module 14 and the placing module 16 may collectively be called the hybrid service processing queue module; and the selecting module 18 may be called the hybrid service priority judgment module. These three modules are described separately below.

The user data integrity judgment module is mainly used to determine whether all of the related channels of one user that need channel combining and channel cascading have completed Maximum Ratio Combining (MRC). When a channel completes MRC, the upstream MRC module immediately notifies the user data integrity judgment module and also passes on the parameters related to channel combining and channel cascading. Based on these parameters and the history information, this module determines whether the data of the user has been fully processed. When it is determined that all of the data of a user has completed MRC, the information of that user is sent to the hybrid service processing queue module.

The hybrid service processing queue module mainly establishes the processing queues according to service type and order of arrival. Three processing queues are established for the three service types 2 ms HSUPA, 10 ms HSUPA and R99. In this module the basic unit of storage is channel information, and the user information consists of several consecutive channel information entries. If there is user information waiting to be processed in a queue, the hybrid service priority judgment module is notified.

The hybrid service priority judgment module mainly selects the channel to be processed from the processing queues according to the priority judgment. Finally, the selected channel information is sent to the downstream secondary despreading module.

In summary, the above embodiments of the present invention can effectively reduce the processing delay of the HSUPA service on the base station side; at the same time, the above embodiments are simple and reliable to apply, can effectively reduce the processing delay of the HSUPA service, and improve system performance.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, or they may be fabricated separately as individual integrated circuit modules, or several of the modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims

CLAIMS

1. A data processing method, comprising:

establishing a queue for each service type, and setting a priority for each queue;

placing user data in the queue corresponding to the service type of the user data, wherein the user data comprises one or more channel data; and

for each queue, selecting user data therefrom for processing according to a predetermined principle.

2. The method according to claim 1, wherein, before placing the user data in the queue corresponding to the service type of the user data, the method further comprises:

determining whether the user data is complete, and, in the case that the user data is determined to be complete, placing the user data in the queue corresponding to the service type of the user data.

3. The method according to claim 2, further comprising:

processing the user data in units of channels before determining whether the user data is complete; and

processing the user data in units of users after determining that the user data is complete.

4. The method according to claim 2, wherein determining whether the user data is complete comprises:

determining whether the user data is complete according to information and/or a history record of the user data, wherein the information of the user data comprises at least one of the following: the channel number of the channel data, the link number used for channel combining, the number of channels and the channel numbers participating in channel combining, the channel processing completion flag in channel combining, the user number used for channel cascading, an indication of the front/rear position of a single channel in channel cascading, and the channel numbers participating in channel cascading.

5. The method according to claim 4, wherein determining whether the user data is complete further comprises:

determining whether channel combining is required for the user data; and

determining whether channel cascading is required for the user data.

6. The method according to any one of claims 1 to 5, wherein the predetermined principle comprises:

for different queues, preferentially selecting the data in the higher-priority queue; and

for the same queue, preferentially selecting the data whose processing was requested earlier.

7. The method according to any one of claims 1 to 5, wherein the service type comprises:

2 millisecond high speed uplink packet access, 10 millisecond high speed uplink packet access, and the R99 service, wherein the priority of the 2 millisecond high speed uplink packet access service is higher than the priority of the 10 millisecond high speed uplink packet access service, and the priority of the 10 millisecond high speed uplink packet access service is higher than the priority of the R99 service.

8. A data processing device, comprising:

an establishing module, configured to establish a queue for each service type;

a setting module, configured to set a priority for each queue;

a placing module, configured to place user data in the queue corresponding to the service type of the user data, wherein the user data comprises one or more channel data; and

a selecting module, configured to select user data from each queue for processing according to a predetermined principle.

9. The device according to claim 8, further comprising:

a judging module, configured to determine whether the user data is complete;

wherein the placing module is configured to place the user data in the queue corresponding to the service type of the user data in the case that the user data is determined to be complete.

10. The device according to claim 9, wherein the judging module comprises:

a first judging sub-module, configured to determine whether channel combining is required for the user data; and

a second judging sub-module, configured to determine whether channel cascading is required for the user data.
PCT/CN2010/073670 2009-08-06 2010-06-08 Data processing method and device WO2011015080A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2009101640502A CN101990252A (en) 2009-08-06 2009-08-06 Data processing method and device
CN200910164050.2 2009-08-06

Publications (1)

Publication Number Publication Date
WO2011015080A1 true WO2011015080A1 (en) 2011-02-10

Family

ID=43543906

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/073670 WO2011015080A1 (en) 2009-08-06 2010-06-08 Data processing method and device

Country Status (2)

Country Link
CN (1) CN101990252A (en)
WO (1) WO2011015080A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8219451B2 (en) 2001-07-13 2012-07-10 Siemens Aktiengesellschaft System and method for electronic delivery of content for industrial automation systems
US8768716B2 (en) 2001-07-13 2014-07-01 Siemens Aktiengesellschaft Database system and method for industrial automation services
CN104579962A (en) * 2015-01-23 2015-04-29 盛科网络(苏州)有限公司 Method and device for differentiating QoS strategies of different messages
KR20160034391A (en) 2013-07-24 2016-03-29 닛산 가가쿠 고교 가부시키 가이샤 Liquid crystal aligning agent and liquid crystal aligning film using same
KR20160077083A (en) 2013-10-23 2016-07-01 닛산 가가쿠 고교 가부시키 가이샤 Liquid crystal aligning agent, liquid crystal alignment film and liquid crystal display element
KR20170066495A (en) 2014-10-03 2017-06-14 닛산 가가쿠 고교 가부시키 가이샤 Liquid crystal aligning agent, liquid crystal alignment film and liquid crystal display element using same

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105630797B (en) * 2014-10-29 2019-02-26 阿里巴巴集团控股有限公司 Data processing method and system
CN105897837A (en) * 2015-12-07 2016-08-24 乐视云计算有限公司 Content distribution task submitting method and system
CN107305473B (en) * 2016-04-21 2019-11-12 华为技术有限公司 A kind of dispatching method and device of I/O request
CN111258759B (en) * 2020-01-13 2023-05-16 北京百度网讯科技有限公司 Resource allocation method and device and electronic equipment
CN114900523A (en) * 2022-05-09 2022-08-12 重庆标能瑞源储能技术研究院有限公司 Directional load balancing data flow processing method under Internet of things architecture

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852581A (en) * 2005-10-12 2006-10-25 上海华为技术有限公司 Method for transmitting data on downward link
CN101267443A (en) * 2008-05-09 2008-09-17 北京天碁科技有限公司 A data processing method and communication device
CN101286949A (en) * 2008-06-06 2008-10-15 北京交通大学 Wireless Mesh network MAC layer resource scheduling policy based on IEEE802.16d standard

Also Published As

Publication number Publication date
CN101990252A (en) 2011-03-23

Similar Documents

Publication Publication Date Title
WO2011015080A1 (en) Data processing method and device
JP3471813B2 (en) Random access in mobile communication systems
US7058035B2 (en) Communication system employing multiple handoff criteria
US7359345B2 (en) Signaling method between MAC entities in a packet communication system
JP4319039B2 (en) Method and apparatus for scheduling reverse channel additional channels
RU2388162C2 (en) Fixed hs-dsch or e-dch allocation for voice over ip transmission (or hs-dsch without hs-scch/e-dch without e-dpcch)
CN100547959C (en) Mobile communication system and re-transmission controlling method
US20050201319A1 (en) Method for transmission of ACK/NACK for uplink enhancement in a TDD mobile communication system
US8948070B2 (en) Scheduling method and system for high speed uplink packet access
CN103201977A (en) System and method for multi-point HSDPA communication utilizing a multi-link PDCP sublayer
US7151934B2 (en) Radio data communications method, server, and radio network controller
RU2482611C2 (en) Method and device for controlling transmission resources in automatic repeat request processes
JP4117271B2 (en) Time scheduling using SAWARQ process
US7685492B2 (en) Method, arrangement, node and mobile unit for improved transmission between two units of a telecommunication system
TW200929914A (en) Methods and apparatuses for controlling data flow
US7552257B2 (en) Data transmission device with a data transmission channel for the transmission of data between data processing devices
CN100442913C (en) Method for paging user's device
WO2017088551A1 (en) Data scheduling method and apparatus for dedicated physical data channel
JP2020519178A (en) Method and apparatus for determining if data is corrupted
CN110612683B (en) Cooperative receiving method of uplink data and network equipment
WO2003015323A1 (en) A dispatching method of packet data based on the capacity of mobile station
JP2019110403A (en) Radio base station system and communication method
CN103634847B (en) The data balancing method and system shunted between the base station of HSDPA multi-stream
WO2011076006A1 (en) Method and device for decoding processing
US20060203764A1 (en) Delay-based cell portion selection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10805984

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10805984

Country of ref document: EP

Kind code of ref document: A1