WO2011127716A1 - Multi-thread processing method and apparatus - Google Patents

Multi-thread processing method and apparatus

Info

Publication number
WO2011127716A1
WO2011127716A1 PCT/CN2010/076854 CN2010076854W
Authority
WO
WIPO (PCT)
Prior art keywords
thread
processing
request message
module
resource block
Prior art date
Application number
PCT/CN2010/076854
Other languages
English (en)
French (fr)
Inventor
李云
吴丽梅
欧阳新志
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2011127716A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request

Definitions

  • the present invention relates to thread processing techniques, and in particular, to a multi-thread processing method and apparatus.
  • in a multi-threaded system, locks are used whenever threads need to operate on the same resource.
  • a so-called "lock" gives a thread exclusive possession of a resource: before the thread's own processing is complete, no other thread is allowed to use the resource it is holding.
  • a lock is used so that a thread operating on a transaction can hold its resources exclusively and lock them; until the thread releases the lock, other threads can only wait and cannot preempt the resource.
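To make this background concrete, here is a minimal Python illustration (ours, not from the patent) of the locking discipline described above: four threads share one counter, and a `threading.Lock` serializes every update so none is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # without the lock, this read-modify-write could interleave with
        # another thread's and lose updates
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: the lock serialized every increment
```

The performance cost the inventors describe comes precisely from this serialization: every thread that fails to acquire the lock must wait.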
  • in practice, the inventors of the present invention found that when locks are used too frequently, system performance suffers; when the number of concurrent requests is large, the probability that one request must wait for another to release a resource increases greatly, request response times grow, and some requests time out.
  • one of the objects of the present invention is to provide a multi-thread processing method and apparatus that improve system performance and avoid the increase in response time caused by lock contention and deadlock.
  • the invention provides a multi-thread processing method, comprising:
  • a distribution thread acquires a request message;
  • the distribution thread determines, according to a parameter carried in the request message, a preset resource block that needs to be accessed;
  • the distribution thread distributes the request message to a processing thread corresponding to the resource block. Further, distributing the request message to the thread corresponding to the resource block includes: the distribution thread searching, according to the identifier preset for the resource block, for the processing thread assigned the corresponding identifier; and distributing the request message to that processing thread.
  • distributing the request message to the processing thread of the corresponding identifier is specifically: sending the parameter contained in the request message to the processing thread of the corresponding identifier.
  • the distribution thread distributing the request message to the processing thread corresponding to the resource block includes:
  • the distribution thread distributing the sender's related information, together with the request message, to the corresponding processing thread.
  • after the distribution, the method includes: after the processing thread completes the service processing, it returns a response message to the request sender.
  • a multi-thread processing device includes:
  • an obtaining module configured to obtain a request message;
  • a selection module configured to determine, according to parameters carried in the request message, a preset resource block that needs to be accessed;
  • a distribution module configured to distribute the request message to a processing thread corresponding to the resource block. Further, the distribution module includes:
  • a searching module configured to search, according to the identifier preset for the resource block, for the processing thread assigned the corresponding identifier;
  • a sending module configured to distribute the request message to the processing thread of the corresponding identifier.
  • the sending module is specifically configured to send the parameter contained in the request message to the processing thread of the corresponding identifier.
  • the distribution module is further configured to distribute the sender's related information, together with the request message, to the corresponding processing thread.
  • the device further includes:
  • a service processing module configured to handle the specific service-related flow;
  • a response module configured to return a response message to the request sender after the service processing is completed.
  • by dividing resources into resource blocks that each correspond to a thread, and having a distribution thread distribute each request to the thread corresponding to its resource block, the use of resource locks and the deadlocks they can cause is avoided, improving response speed.
  • FIG. 1 is a flowchart of an embodiment of a multi-thread processing method according to the present invention;
  • FIG. 2 is an architectural schematic diagram of an embodiment of a multi-thread processing method according to the present invention;
  • FIG. 3 is a schematic structural diagram of an embodiment of a multi-thread processing apparatus according to the present invention.
  • FIG. 1 is a flowchart of an embodiment of a multi-thread processing method according to the present invention.
  • the resource is divided into multiple blocks according to the characteristics of the shared resource; one processing thread is set up for each resource block, and each thread can access only the resource block allocated to it.
  • a distribution thread is set up in advance to distribute all requests arriving from outside. The distribution thread determines the resource block to be accessed according to the parameters carried by the request (such as the user's mobile phone number, IP address, etc.), and then distributes the request to the thread corresponding to that resource block for processing.
  • the architecture diagram is shown in Figure 2.
  • the specific implementation steps are described using a request carrying a mobile phone number as an example.
  • the resources are divided according to the parameter carried in the request (the mobile phone number). Because user information occupies system memory, the memory the system uses is divided into 10 blocks, numbered 0-9, according to the last digit (0 to 9) of the mobile phone number, and one thread is set up for each block, so the whole system has one distribution thread and 10 processing threads. The distribution thread obtains the specific parameter, the mobile phone number, from the request and locates the processing thread to forward to according to that parameter. After receiving the request from the distribution thread, each processing thread accesses its own resource block to complete the service processing.
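The architecture just described (one distribution thread, ten processing threads, one memory block per tail digit) can be sketched in Python. This is an illustrative toy, not the patent's implementation: queues stand in for inter-thread message passing and a dict stands in for each memory block.

```python
import queue
import threading

NUM_BLOCKS = 10  # one resource block (and one worker) per phone-number tail digit 0-9

inboxes = [queue.Queue() for _ in range(NUM_BLOCKS)]   # one inbox per processing thread
resource_blocks = [{} for _ in range(NUM_BLOCKS)]      # each dict is touched by exactly one thread
results = queue.Queue()                                # stands in for replying to the sender

def worker(block_id):
    block = resource_blocks[block_id]   # exclusive to this thread: no lock anywhere
    while True:
        request = inboxes[block_id].get()
        if request is None:             # shutdown sentinel
            break
        phone = request["phone"]
        block[phone] = block.get(phone, 0) + 1                 # the "business processing"
        results.put((request["sender"], phone, block[phone]))  # reply goes straight back

def distribute(request):
    """Distribution-thread logic: route by the phone number's last digit."""
    inboxes[int(request["phone"][-1]) % NUM_BLOCKS].put(request)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_BLOCKS)]
for t in threads:
    t.start()

for sender, phone in [("10.0.0.1:5000", "13800000003"),
                      ("10.0.0.2:5000", "13800000009"),
                      ("10.0.0.1:5001", "13800000003")]:
    distribute({"sender": sender, "phone": phone})

for q in inboxes:                       # tell every worker to stop
    q.put(None)
for t in threads:
    t.join()
print(resource_blocks[3])   # {'13800000003': 2}
print(resource_blocks[9])   # {'13800000009': 1}
```

Because both requests ending in 3 land on the same worker, their updates to block 3 are naturally serialized without any lock.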
  • the specific steps are as follows:
  • Step S101: A distribution thread acquires a request message.
  • when an external system needs processing related to a mobile phone number, it sends a request message containing that number.
  • Step S102: The distribution thread determines the resource block that the request message needs to access.
  • the distribution thread determines the preset resource block to be accessed according to the parameters carried in the request message.
  • the distribution thread parses the parameters in the request message to obtain a specific parameter, such as the mobile phone number, and computes from it which service thread the request should be distributed to. For example, if the last digit of the mobile phone number is 3, the request should be sent to thread No. 3.
  • Step S103: The distribution thread distributes the request message to the processing thread corresponding to the resource block.
  • after determining the resource block that the request message needs to access, the distribution thread looks up the processing thread corresponding to the resource block, assembles the distribution message (adding some information, such as the sender's IP address and port, to the original request message), and then forwards it to the corresponding processing thread, No. 3 in this example.
  • resource blocks and threads are all assigned identifier bits in advance.
  • an ID is used as the identifier, and a resource block and its corresponding thread are assigned the same ID. The distribution thread can therefore find the processing thread assigned the corresponding ID according to the preset ID of the resource block that the request message needs to access, and then distribute the request message to the processing thread of that ID.
  • the distribution thread is responsible only for distribution and performs no service processing; it obtains the necessary parameters from the request message, such as the user's mobile phone number or IP address, which locate the resource the request needs to access. For example, requests are distributed by the last digit of the mobile phone number: a number ending in 0 goes to processing thread No. 0, and so on, up to a number ending in 9 going to processing thread No. 9.
  • because resource blocks and processing threads are in one-to-one correspondence, the distribution thread can distribute the request message to the corresponding processing thread.
  • the distribution thread must distribute the sender's related information (such as the sender's IP address and port) to the corresponding processing thread along with the request message, and it caches no information about the request message locally.
  • the distribution thread forwards the sender's related information with the request message so that, after processing is complete, the processing thread can send the response message using that information.
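A small sketch of this assembly step (function and field names are our own, not the patent's): the sender's address travels inside the message, the distribution thread caches nothing, and the processing thread replies directly.

```python
def assemble(request_msg: bytes, sender_ip: str, sender_port: int) -> dict:
    """Build the distribution message: the original request plus sender info.

    The distribution thread keeps no local copy; everything a processing
    thread needs in order to reply is carried inside the message itself.
    """
    return {"sender": (sender_ip, sender_port), "payload": request_msg}

def handle(msg: dict) -> tuple:
    """Processing-thread side: do the work, then reply straight to the sender."""
    response = b"OK:" + msg["payload"]
    return msg["sender"], response   # reply target comes from the message, not the distributor

msg = assemble(b"query:13800000003", "10.0.0.1", 5000)
target, response = handle(msg)
print(target)    # ('10.0.0.1', 5000)
```

Keeping the distributor stateless is what lets the response bypass it entirely.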
  • the multi-thread processing method of the present invention further includes: Step S104: After completing the service processing, the processing thread returns a response message to the request sender.
  • after receiving the request, the processing thread performs the related service processing and then returns a response message directly to the sender of the request message; the response is no longer returned through the distribution thread.
  • each processing thread can access only its own resource block; the processing threads are completely independent and do not interact, and the resource blocks are likewise independent of one another. Because each processing thread accesses an exclusive resource block allocated to itself, there is no mutual exclusion and no lock is needed.
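The no-lock claim can be checked with a toy experiment (ours, not from the patent): each thread writes only the block it owns, so even unsynchronized read-modify-write updates are never lost.

```python
import threading

NUM_BLOCKS = 4
blocks = [0] * NUM_BLOCKS   # one "resource block" (here just a counter) per thread

def owner(block_id: int, n: int):
    # this thread is the only writer of blocks[block_id], so the
    # unsynchronized read-modify-write below cannot race with anyone
    for _ in range(n):
        blocks[block_id] += 1

threads = [threading.Thread(target=owner, args=(i, 100_000)) for i in range(NUM_BLOCKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(blocks)  # [100000, 100000, 100000, 100000]: nothing lost, no lock used
```

Contrast with shared-counter code, which loses updates without a lock: exclusive ownership removes the race instead of serializing it.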
  • by dividing resources into resource blocks that each correspond to a thread, and having the distribution thread distribute each request to the thread corresponding to its resource block, resource lock-ups are avoided and response speed is improved.
  • FIG. 3 is a schematic structural diagram of an embodiment of a multi-thread processing apparatus according to the present invention.
  • An embodiment of the present invention provides a multi-thread processing apparatus, including:
  • An obtaining module 31 configured to obtain a request message
  • the selecting module 32 is configured to determine, according to parameters carried in the request message, a preset resource block that needs to be accessed;
  • the selection module 32 determines the preset resource block to be accessed according to the parameters carried in the request message.
  • the selection module 32 parses the parameters in the request message, obtains a specific parameter, such as the mobile phone number, and computes from it which service thread the request should be distributed to; if the last digit of the mobile phone number is 3, the request should be sent to thread No. 3.
  • the distribution module 33 is configured to distribute the request message to a processing thread corresponding to the resource block.
  • after the distribution module 33 determines the resource block that the request message needs to access, it looks up the processing thread corresponding to the resource block, assembles the distribution message by adding information about the sender (such as the sender's IP address and port) to the original request message, and then forwards it to the corresponding processing thread, No. 3 in this example.
  • the distribution module 33 includes:
  • the searching module 331 is configured to search, according to the preset ID of the resource block, a processing thread to which the corresponding ID is assigned;
  • the resource block and the thread are all assigned an ID in advance, and the ID of the resource block and its corresponding thread are the same. Therefore, the searching module 331 can search for the processing thread to which the corresponding ID is assigned according to the preset ID of the resource block that the request message needs to access.
  • the sending module 332 is configured to distribute the request message to a processing thread of the corresponding ID.
  • the sending module 332 is specifically configured to send a parameter to the processing thread of the corresponding ID according to the parameter included in the request message.
  • the distribution module 33 is further configured to distribute the sender's related information, together with the request message, to the corresponding processing thread.
  • the device further includes:
  • the service processing module 34 is configured to perform related business processing.
  • the response module 35 is configured to return a response message to the request sender after the service processing is completed. After a processing thread receives a request, the service processing module 34 performs the related service processing, and the response module 35 then returns a response message directly to the sender of the request message, no longer through the distribution thread.
  • by dividing resources into resource blocks that each correspond to a thread, and having the distribution thread distribute each request to the thread corresponding to its resource block, this embodiment avoids resource lock-ups and improves response speed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)

Description

Multi-thread processing method and apparatus
Technical Field
The present invention relates to thread processing technology, and in particular to a multi-thread processing method and apparatus.
Background
As computer technology continues to develop, users demand ever higher computer performance. To improve computer system performance, many software systems use multi-threading to exploit the hardware's potential as fully as possible. An application that uses multi-threading can make better use of system resources; its main advantage is that it fully exploits the CPU's idle time slices and responds to user requests in as little time as possible, which considerably improves the overall running efficiency of the process and also makes the application more flexible.
In a multi-threaded system, a lock must be used whenever threads operate on the same resource. A so-called "lock" gives a thread exclusive possession: before the thread's own processing is complete, no other thread is allowed to use the resource it is holding. A lock is used so that a thread operating on a transaction can hold its resources exclusively and lock them; until it unlocks, other threads can only wait and cannot preempt.
In practice, the inventors of the present invention found that when locks are used too frequently, system performance suffers; when the number of concurrent requests is large, the probability that one request waits for another to release a resource increases greatly, request response times grow, and some requests time out.
Summary of the Invention
One object of the present invention is to provide a multi-thread processing method and apparatus that improve system performance and avoid the increase in response time caused by lock contention and deadlock. The present invention provides a multi-thread processing method, comprising:
a distribution thread acquiring a request message;
the distribution thread determining, according to a parameter carried in the request message, a preset resource block that needs to be accessed;
the distribution thread distributing the request message to a processing thread corresponding to the resource block. Further, the distribution thread distributing the request message to the thread corresponding to the resource block comprises: the distribution thread searching, according to an identifier preset for the resource block, for the processing thread assigned the corresponding identifier;
and distributing the request message to the processing thread of the corresponding identifier.
Further, distributing the request message to the processing thread of the corresponding identifier is specifically: according to the parameter contained in the request message, sending the parameter to the processing thread of the corresponding identifier.
Further, the distribution thread distributing the request message to the processing thread corresponding to the resource block comprises:
the distribution thread distributing information about the sender of the request message, together with the request message, to the corresponding processing thread.
Further, after the distribution thread distributes the request message to the thread corresponding to the resource block, the method comprises: the processing thread, after completing the service processing, returning a response message to the request sender.
A multi-thread processing apparatus comprises:
an acquiring module configured to acquire a request message;
a selecting module configured to determine, according to a parameter carried in the request message, a preset resource block that needs to be accessed;
a distributing module configured to distribute the request message to a processing thread corresponding to the resource block. Further, the distributing module comprises:
a searching module configured to search, according to an identifier preset for the resource block, for the processing thread assigned the corresponding identifier;
a sending module configured to distribute the request message to the processing thread of the corresponding identifier.
Further, the sending module is specifically configured to send, according to the parameter contained in the request message, the parameter to the processing thread of the corresponding identifier.
Further, the distributing module is further configured to distribute information about the sender of the request message, together with the request message, to the corresponding processing thread.
Further, the apparatus further comprises:
a service processing module configured to handle the specific service-related flow;
a response module configured to return a response message to the request sender after the service processing is completed.
By dividing resources into resource blocks that each correspond to a thread, and having a distribution thread distribute each request to the thread corresponding to its resource block, embodiments of the present invention avoid the use of resource locks and deadlock, and improve response speed.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of a multi-thread processing method according to the present invention;
FIG. 2 is an architectural schematic diagram of an embodiment of a multi-thread processing method according to the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of a multi-thread processing apparatus according to the present invention.
Detailed Description
It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
Referring to FIG. 1, a flowchart of an embodiment of a multi-thread processing method according to the present invention is shown.
In this embodiment, the resource is divided in advance into multiple blocks according to the characteristics of the shared resource; one processing thread is set up for each resource block, and each thread can access only the resource block allocated to it. A distribution thread is set up in advance to distribute all requests arriving from outside. The distribution thread determines the resource block to be accessed according to a parameter carried by the request (for example the user's mobile phone number, IP address, and so on), and then distributes the request to the thread corresponding to that resource block for processing. The architecture is shown in FIG. 2.
The specific implementation steps are described using a request carrying a mobile phone number as an example. Resources are divided according to the parameter carried in the request (the mobile phone number). Because storing user information occupies system memory, the memory the system uses is divided into 10 blocks numbered 0-9 according to the last digit (0 to 9) of the mobile phone number, and one thread is set up for each block, so the whole system has one distribution thread and 10 processing threads. The distribution thread extracts the mobile phone number from the request and uses it to locate the processing thread to forward to. After receiving a request from the distribution thread, each processing thread accesses its own resource block to complete the service processing. The specific steps are as follows:
Step S101: The distribution thread acquires a request message.
When an external system needs processing related to a mobile phone number, it sends a request message containing that number.
Step S102: The distribution thread determines the resource block that the request message needs to access.
The distribution thread determines the preset resource block to be accessed according to the parameter carried in the request message. It parses the parameters in the request message to obtain the specific parameter, for example the mobile phone number, and computes from it which service thread the request should be distributed to; for example, if the last digit of the mobile phone number is 3, the request should be sent to thread No. 3.
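The routing rule of step S102 amounts to a simple hash of the request parameter; a one-line illustrative sketch (ours, not the patent's code):

```python
def target_thread(phone: str, num_threads: int = 10) -> int:
    """Map a request to a processing-thread number by the phone's last digit."""
    return int(phone[-1]) % num_threads

print(target_thread("13800000003"))  # 3
print(target_thread("13800000009"))  # 9
```

Any deterministic function of the parameter works, as long as every request for the same resource always lands on the same thread.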
Step S103: The distribution thread distributes the request message to the processing thread corresponding to the resource block.
After determining the resource block that the request message needs to access, the distribution thread looks up the processing thread corresponding to that resource block, assembles the distribution message (that is, adds some information to the original request message, such as information about the sender, for example the sender's IP address and port), and then forwards it to the corresponding processing thread, No. 3 in this example.
In this embodiment, resource blocks and threads are all assigned identifiers in advance; an ID serves as the identifier, and a resource block and its corresponding thread are assigned the same ID. The distribution thread can therefore look up the processing thread assigned the corresponding ID according to the preset ID of the resource block the request message needs to access, and then distribute the request message to the processing thread of that ID.
The distribution thread is responsible only for distribution and performs no service processing. It needs to obtain the necessary parameters from the request message, such as the user's mobile phone number or IP address; these parameters locate the resource the request needs to access. For example, requests are distributed to processing threads according to the last digit of the mobile phone number: a number ending in 0 is distributed to processing thread No. 0, and so on, up to a number ending in 9 being distributed to processing thread No. 9. Once the resource block is located, because resource blocks and processing threads are in one-to-one correspondence, the distribution thread can distribute the request message to the corresponding processing thread.
The distribution thread must distribute information about the sender of the request message (such as the sender's IP address and port) to the corresponding processing thread together with the request message, and it caches no information about the request message locally.
Further, the distribution thread distributes the sender information along with the request message so that, after processing, the processing thread can send the response message according to the sender information.
On the basis of the preceding steps, the multi-thread processing method of the present invention further includes: Step S104: After completing the service processing, the processing thread returns a response message to the request sender.
After receiving a request, the processing thread performs the related service processing and then returns a response message directly to the sender of the request message; the response is no longer returned through the distribution thread.
In this embodiment, each processing thread can access only its own resource block; the processing threads are completely independent and do not interact, and the resource blocks are likewise independent of one another. Because each processing thread accesses an exclusive resource block allocated to itself, there is no mutual exclusion and therefore no need for locks.
By dividing resources into resource blocks that each correspond to a thread, and having the distribution thread distribute each request to the thread corresponding to its resource block, this embodiment avoids resource lock-ups and improves response speed.
Referring to FIG. 3, a schematic structural diagram of an embodiment of a multi-thread processing apparatus according to the present invention is shown. An embodiment of the present invention provides a multi-thread processing apparatus, comprising:
an acquiring module 31 configured to acquire a request message;
a selecting module 32 configured to determine, according to a parameter carried in the request message, a preset resource block that needs to be accessed;
The selecting module 32 determines the preset resource block to be accessed according to the parameter carried in the request message. It parses the parameters in the request message to obtain the specific parameter, for example the mobile phone number, and computes from it which service thread the request should be distributed to; if the last digit of the mobile phone number is 3, the request should be sent to thread No. 3.
a distributing module 33 configured to distribute the request message to the processing thread corresponding to the resource block.
After determining the resource block that the request message needs to access, the distributing module 33 looks up the processing thread corresponding to the resource block, assembles the distribution message by adding information about the sender (such as the sender's IP address and port) to the original request message, and then forwards it to the corresponding processing thread, No. 3 in this example.
Further, the distributing module 33 comprises:
a searching module 331 configured to search, according to the preset ID of the resource block, for the processing thread assigned the corresponding ID;
In this embodiment, resource blocks and threads are all assigned IDs in advance, and a resource block and its corresponding thread have the same ID, so the searching module 331 can look up the processing thread assigned the corresponding ID according to the preset ID of the resource block the request message needs to access.
a sending module 332 configured to distribute the request message to the processing thread of the corresponding ID.
Further, the sending module 332 is specifically configured to send, according to the parameter contained in the request message, the parameter to the processing thread of the corresponding ID.
Further, the distributing module 33 is further configured to distribute information about the sender of the request message, together with the request message, to the corresponding processing thread.
Further, the apparatus further comprises:
a service processing module 34 configured to perform the related service processing;
a response module 35 configured to return a response message to the request sender after the service processing is completed. After a processing thread receives a request, the service processing module 34 performs the related service processing, and the response module 35 then returns a response message directly to the sender of the request message, no longer through the distribution thread. By dividing resources into resource blocks that each correspond to a thread, and having the distribution thread distribute each request to the thread corresponding to its resource block, embodiments of the present invention avoid resource lock-ups and improve response speed.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims

Claims
1. A multi-thread processing method, comprising:
a distribution thread acquiring a request message;
the distribution thread determining, according to a parameter carried in the request message, a preset resource block that needs to be accessed;
the distribution thread distributing the request message to a processing thread corresponding to the resource block.
2. The multi-thread processing method according to claim 1, wherein the distribution thread distributing the request message to the thread corresponding to the resource block is specifically:
the distribution thread searching, according to an identifier preset for the resource block, for the processing thread assigned the corresponding identifier;
and distributing the request message to the processing thread of the corresponding identifier.
3. The multi-thread processing method according to claim 2, wherein distributing the request message to the processing thread of the corresponding identifier is specifically:
according to the parameter contained in the request message, sending the parameter to the processing thread of the corresponding identifier.
4. The multi-thread processing method according to claim 1 or 2, wherein the distribution thread distributing the request message to the processing thread corresponding to the resource block is specifically:
the distribution thread distributing information about the sender of the request message, together with the request message, to the corresponding processing thread.
5. The multi-thread processing method according to any one of claims 1 to 3, further comprising, after the distribution thread distributes the request message to the thread corresponding to the resource block:
the processing thread, after completing the service processing, returning a response message to the request sender.
6. A multi-thread processing apparatus, comprising an acquiring module, a selecting module and a distributing module, wherein:
the acquiring module is configured to acquire a request message; the selecting module is configured to determine, according to a parameter carried in the request message, a preset resource block that needs to be accessed;
the distributing module is configured to distribute the request message to a processing thread corresponding to the resource block.
7. The multi-thread processing apparatus according to claim 6, wherein the distributing module further comprises a searching module and a sending module, wherein:
the searching module is configured to search, according to an identifier preset for the resource block, for the processing thread assigned the corresponding identifier;
the sending module is configured to distribute the request message to the processing thread of the corresponding identifier.
8. The multi-thread processing apparatus according to claim 7, wherein the sending module is specifically configured to send, according to the parameter contained in the request message, the parameter to the processing thread of the corresponding identifier.
9. The multi-thread processing apparatus according to claim 6 or 7, wherein the distributing module is further configured to distribute information about the sender of the request message, together with the request message, to the corresponding processing thread.
10. The multi-thread processing apparatus according to any one of claims 6 to 8, wherein the apparatus further comprises a service processing module and a response module, wherein:
the service processing module is configured to handle the specific service-related flow;
the response module is configured to return a response message to the request sender after the service processing is completed.
PCT/CN2010/076854 2010-04-16 2010-09-13 Multi-thread processing method and apparatus WO2011127716A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010152572.3 2010-04-16
CN201010152572A CN101826003A (zh) 2010-04-16 2010-04-16 Multi-thread processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2011127716A1 true WO2011127716A1 (zh) 2011-10-20

Family

ID=42689935

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/076854 WO2011127716A1 (zh) 2010-04-16 2010-09-13 Multi-thread processing method and apparatus

Country Status (2)

Country Link
CN (1) CN101826003A (zh)
WO (1) WO2011127716A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826003A (zh) * 2010-04-16 2010-09-08 中兴通讯股份有限公司 Multi-thread processing method and apparatus
CN101917525A (zh) * 2010-09-15 2010-12-15 烽火通信科技股份有限公司 Method and apparatus for processing notification message tasks in a presence service
CN104216684B (zh) * 2013-06-04 2017-05-31 阿里巴巴集团控股有限公司 Multi-core parallel system and data processing method thereof
CN103559097B (zh) * 2013-10-18 2017-06-09 北京奇虎科技有限公司 Method and apparatus for inter-process communication in a browser, and browser
CN105378652B (zh) * 2013-12-24 2018-02-06 华为技术有限公司 Thread shared-resource allocation method and apparatus
CN107153653B (zh) * 2016-03-03 2020-06-26 阿里巴巴集团控股有限公司 Polling access method and apparatus for sharded databases and tables
CN106201705B (zh) * 2016-07-25 2019-10-08 东软集团股份有限公司 Message processing method and apparatus
CN106528299B (zh) * 2016-09-23 2019-12-03 北京华泰德丰技术有限公司 Data processing method and apparatus
CN107861799B (zh) * 2016-12-28 2020-12-25 平安科技(深圳)有限公司 Task processing method and apparatus based on a multi-threaded environment
CN108462682A (zh) * 2017-02-22 2018-08-28 成都鼎桥通信技术有限公司 Method and apparatus for distributing Session Initiation Protocol (SIP) messages
CN109032767B (zh) * 2018-07-26 2021-04-02 苏州科达科技股份有限公司 Asynchronous multi-process service processing system, method, apparatus and storage medium
CN109815258A (zh) * 2018-12-29 2019-05-28 深圳云天励飞技术有限公司 Data processing method and apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1874538A (zh) * 2005-07-20 2006-12-06 华为技术有限公司 Concurrent processing method for call events
CN101127685A (zh) * 2007-09-20 2008-02-20 中兴通讯股份有限公司 Inter-process communication apparatus and inter-process communication method thereof
CN101826003A (zh) * 2010-04-16 2010-09-08 中兴通讯股份有限公司 Multi-thread processing method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7551617B2 (en) * 2005-02-08 2009-06-23 Cisco Technology, Inc. Multi-threaded packet processing architecture with global packet memory, packet recirculation, and coprocessor


Also Published As

Publication number Publication date
CN101826003A (zh) 2010-09-08

Similar Documents

Publication Publication Date Title
WO2011127716A1 (zh) Multi-thread processing method and apparatus
JP6882511B2 (ja) ブロックチェーンコンセンサスのための方法、装置およびシステム
US8713186B2 (en) Server-side connection resource pooling
US8209690B2 (en) System and method for thread handling in multithreaded parallel computing of nested threads
US10146702B2 (en) Memcached systems having local caches
US20160306680A1 (en) Thread creation method, service request processing method, and related device
WO2015090244A2 (zh) Metadata access method, server and system
CN109120614B (zh) 基于分布式系统的业务处理方法及装置
EP3161669B1 (en) Memcached systems having local caches
CN106991008B (zh) 一种资源锁管理方法、相关设备及系统
WO2013181939A1 (zh) Virtualization management method for communication device hardware resources and related apparatus
US10360057B1 (en) Network-accessible volume creation and leasing
CN102591726A Multi-process communication method
CN103399894A Distributed transaction processing method based on a shared storage pool
US10397317B2 (en) Boomerang join: a network efficient, late-materialized, distributed join technique
KR20140070611A (ko) 트랜잭셔널 미들웨어 머신 환경에서 단일 포인트 병목을 방지하는 시스템 및 방법
CN113190528B Parallel distributed big data architecture construction method and system
CN111290842A Task execution method and apparatus
WO2019056263A1 Computer storage medium, embedded scheduling method and system
US11544069B2 (en) Universal pointers for data exchange in a computer system having independent processors
Singh et al. A priority heuristic policy in mobile distributed real-time database system
Lam et al. TSHMEM: shared-memory parallel computing on Tilera many-core processors
CN103647712A Method and system for distributed routing service processing
CN113590323A MapReduce-oriented data transmission method, apparatus, device and storage medium
KR101512647B1 Method for selecting a query processing engine

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10849725

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10849725

Country of ref document: EP

Kind code of ref document: A1