WO2012171398A1 - Shared resource access method in a real-time processing system, and real-time processing system - Google Patents

Shared resource access method in a real-time processing system, and real-time processing system

Info

Publication number
WO2012171398A1
Authority
WO
WIPO (PCT)
Prior art keywords
thread
ipu
priority
ico
configuration command
Prior art date
Application number
PCT/CN2012/073555
Other languages
English (en)
French (fr)
Inventor
吴青
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2012171398A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/52 — Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Definitions

  • the present invention relates to the field of communications, and in particular to a shared resource access method in a real-time processing system, and to a real-time processing system.
  • for a real-time processing system, real-time processing of data is a key performance consideration.
  • to improve the efficiency of parallel data processing, a multi-thread processing mechanism is often adopted. During multi-thread processing, two or more threads may access the same data at the same time, so access to the shared resources in the system must be handled.
  • at present, when multiple threads access shared data, the common practice is to introduce semaphore locks to ensure mutually exclusive access to resources. However, this locking method is prone to missed locks and deadlocks, so the system easily hangs (i.e., crashes) during the debugging stage, and such exceptions are difficult to locate. No effective solution to this problem has yet been proposed.
  • embodiments of the present invention provide a shared resource access method in a real-time processing system, and a real-time processing system, so as to at least solve the problem that the semaphore locking method easily causes the system to hang.
  • according to an embodiment of the present invention, a shared resource access method in a real-time processing system is provided, including: starting the threads of the real-time processing system, where the real-time processing system includes a plurality of IPUs and each IPU includes a CP thread and an ICO thread, with the priority relationship of threads within an IPU being: priority of the CP thread > priority of the ICO thread; receiving a configuration command input by the user and caching the configuration command in the CDB, where the thread priority of the CDB < the priority of the ICO thread; and, within each IPU, accessing shared resources according to the priority of each thread.
  • accessing the shared resources according to the priority of each thread includes: reading a configuration command from the CDB when the real-time processing system is in an idle state; determining the CP thread of the corresponding IPU according to the configuration command and sending the configuration command to the determined CP thread; and the CP thread processing the shared resources according to the configuration command. Configuration commands are read from the CDB according to the FIFO principle.
  • determining the CP thread of the corresponding IPU according to the configuration command includes: querying a command mapping table according to the configuration command, where the command mapping table stores the correspondence between command sets and CP threads; and determining the CP thread of the corresponding IPU according to the query result.
  • the above-mentioned configuration command is sent to the determined CP thread by asynchronous transmission.
  • accessing the shared resources according to the priority of each thread further includes: when the running time of an ICO thread arrives, running the ICO thread to access the shared resources. Calling delay operations of the operating system is prohibited while the threads of an IPU are running.
  • the functions of the plurality of IPUs are independent.
  • the shared memory operation between threads in each of the above IPUs does not need to be protected by any mechanism. If data needs to be transmitted between two IPUs, the foregoing method further includes: sending, by the IPU that initiates the data transmission, the notification information to the IPU that receives the data.
  • sending, by the IPU that initiates the data transmission, the notification information to the IPU that receives the data includes: determining whether the thread that initiates the data transmission is an ICO thread and, if so, determining whether the thread receiving the data is a CP thread; if it is a CP thread, adjusting the priorities of the ICO threads in the two IPUs so that the priority of the first ICO thread is less than the priority of the second ICO thread, where the first ICO thread is the ICO thread in the IPU that initiates the data transmission and the second ICO thread is the ICO thread in the IPU that receives the data.
  • according to another embodiment of the present invention, a real-time processing system is provided, including: a thread startup module configured to start the threads of the real-time processing system, where the real-time processing system includes a plurality of IPUs and each IPU includes a CP thread and an ICO thread, with the priority relationship of threads within an IPU being: priority of the CP thread > priority of the ICO thread; a configuration command cache module configured to receive a configuration command input by the user and cache the configuration command in the CDB, where the thread priority of the CDB < the priority of the ICO thread; and a resource access module configured to access shared resources within each IPU according to the priority of each thread.
  • the resource access module includes: a configuration command reading unit configured to read a configuration command from the CDB when the real-time processing system is in an idle state; a configuration command sending unit configured to determine the CP thread of the corresponding IPU according to the configuration command and send the configuration command to the determined CP thread; and a processing unit configured to process the shared resources, through the CP thread, according to the configuration command.
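As a rough illustration of the priority scheme above, the following Python sketch (all names and numeric priorities are hypothetical; the patent targets an embedded multi-threaded system, not Python) checks the invariant priority(CP) > priority(ICO) > priority(CDB), with larger numbers meaning higher priority:

```python
from dataclasses import dataclass

@dataclass
class IPU:
    name: str
    cp_priority: int    # Config Process thread priority
    ico_priority: int   # Information Collection and Operation thread priority

def priorities_valid(cdb_priority: int, ipus: list[IPU]) -> bool:
    """Check the invariant: CP > ICO inside each IPU, and CDB < every ICO."""
    return all(
        ipu.cp_priority > ipu.ico_priority > cdb_priority
        for ipu in ipus
    )

# Numbers taken from the single-board example later in the text
# (CDB = 2, AlmPerf CP/ICO = 5/3, Service CP/ICO = 8/6).
ipus = [IPU("AlmPerfIPU", 5, 3), IPU("ServiceIPU", 8, 6)]
print(priorities_valid(2, ipus))  # True
```

Because the CDB sits below every ICO and each CP sits above its ICO, a running ICO starves the CDB (so no new configuration reaches a CP), and a dispatched CP preempts its ICO; this ordering is what replaces semaphore locking.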
  • FIG. 1 is a flowchart of a shared resource access method in a real-time processing system according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic structural diagram of a real-time processing system according to Embodiment 1 of the present invention
  • FIG. 3 is a flowchart of shared resource access processing according to Embodiment 1 of the present invention
  • FIG. 4 is a schematic diagram of message transmission when the notification information is initiated by the CP according to Embodiment 1 of the present invention
  • FIG. 5 is a schematic diagram of message transmission when the notification information is initiated by the ICO according to Embodiment 1 of the present invention
  • FIG. 6 is a schematic structural diagram of a single board embedded software system according to Embodiment 1 of the present invention
  • FIG. 7 is a structural block diagram of a real-time processing system according to Embodiment 2 of the present invention
  • FIG. 8 is a structural block diagram of a resource access module according to Embodiment 2 of the present invention
  • an embodiment of the present invention uses thread priorities to avoid conflicts when shared resources are accessed; it can be applied in a multi-threaded single-core real-time processing system and can implement mutually exclusive access to shared resources.
  • based on this, an embodiment of the present invention provides a shared resource access method in a real-time processing system, and a real-time processing system.
  • Embodiment 1. This embodiment provides a shared resource access method in a real-time processing system. Referring to FIG. 1, the method includes the following steps (steps S102-S106): Step S102: starting the threads of the real-time processing system.
  • the real-time processing system of this embodiment includes multiple Independent Process Units (IPUs), and each IPU includes a Config Process (CP) thread and a real-time Information Collection and Operation (ICO) thread; the priority relationship of threads within an IPU is: priority of the CP thread > priority of the ICO thread;
  • CP: Config Process
  • ICO: real-time Information Collection and Operation
  • in actual implementation, one CP thread and one ICO thread can be set in one IPU, and the CP thread and the ICO thread within an IPU can share resources.
  • when multiple real-time information collection and operation commands all need to access the shared resources in the IPU, these commands can all be set to use the single ICO thread, and mutually exclusive access to the shared resources is achieved by setting the period trigger moment of each collection and operation command; Step S104: receiving a configuration command input by the user, and caching the configuration command in the Config Distribution Buffer (CDB), where the thread priority of the CDB < the priority of the ICO thread;
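The idea of serializing several periodic collection/operation commands onto a single ICO thread via staggered period trigger moments can be sketched as follows (a tick-based simulation with invented command names, not the patent's implementation; because one thread executes the due commands serially, their accesses to the IPU's shared resources are mutually exclusive by construction):

```python
def due_commands(now, commands):
    """commands: list of (name, period, offset) in ticks.
    Return the names whose staggered trigger moment falls on this tick."""
    return [name for name, period, offset in commands
            if (now - offset) >= 0 and (now - offset) % period == 0]

# Two commands with the same period but offset phases never fire together.
cmds = [("collect_temp", 10, 0), ("collect_rate", 10, 5)]
print(due_commands(10, cmds))  # ['collect_temp']
print(due_commands(15, cmds))  # ['collect_rate']
```

Staggering the offsets keeps the single ICO thread's workload spread out, so each command finishes before the next one's trigger moment arrives.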
  • Step S106: within each IPU, accessing the shared resources according to the priority of each thread.
  • in the system design stage, the division of IPUs and the composition of the CP and ICO threads can be determined according to analysis of the system's key models and the distribution of shared resources, and the priorities of the CP and ICO in each unit can be assigned. This avoids restricting access by locking the corresponding shared resources with semaphores, which is equivalent to eliminating the source of faults caused by shared resource access; it can also improve the system architecture design, thereby improving the reliability, maintainability and ease of use of the system.
  • in this embodiment, shared resources are accessed according to thread priority, which avoids access conflicts when multiple threads share resources, solves the problem that semaphore locking easily causes the system to hang, and enhances the stability and reliability of the system.
  • accessing the shared resources according to the priority of each thread in the foregoing step S106 may include: reading a configuration command from the CDB when the real-time processing system is in an idle state; determining the CP thread of the corresponding IPU according to the configuration command and sending the configuration command to the determined CP thread; and the CP thread processing the shared resources according to the configuration command.
  • configuration commands can be read from the CDB according to the First In First Out (FIFO) principle.
  • a command mapping table may be set in the CDB, and the correspondence between the command set and the CP is saved in the mapping table.
  • determining the CP thread of the corresponding IPU according to the configuration command includes: querying the command mapping table according to the configuration command, where the command mapping table stores the correspondence between command sets and CP threads; and determining the CP thread of the corresponding IPU according to the query result.
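A minimal sketch of the command-mapping-table lookup just described (the dict-based CmdMap, the command names, and the `register`/`resolve_cp` helpers are all assumptions for illustration):

```python
cmd_map: dict[str, str] = {}  # CmdMap: command name -> CP thread name

def register(command_set: list[str], cp_thread: str) -> None:
    """Each CP registers the command set it handles into CmdMap."""
    for cmd in command_set:
        cmd_map[cmd] = cp_thread

def resolve_cp(command: str) -> str:
    """Query CmdMap to find the CP of the corresponding IPU."""
    try:
        return cmd_map[command]
    except KeyError:
        raise ValueError(f"no CP registered for command {command!r}")

register(["add_service", "del_service"], "ServiceCP")
register(["set_alarm_threshold"], "AlmPerfCP")
print(resolve_cp("add_service"))  # ServiceCP
```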
  • the foregoing configuration command may be sent to the determined CP thread asynchronously; that is, after sending the configuration command, the CDB can immediately proceed with its subsequent operations.
  • the ICO thread executes real-time information collection and operation commands, so it decides whether to run according to its configured time period. Based on this, accessing the shared resources according to the priority of each thread further includes: when the running time of the ICO thread arrives, running the ICO thread to access the shared resources. To prevent the threads in an IPU from hanging while running and to preserve the mutual exclusion mechanism for shared resources, calling delay operations of the operating system is prohibited while the threads of an IPU in this embodiment are running. For simplicity of implementation, when dividing IPUs in the system, the functional association between the multiple IPUs should be minimized or eliminated; for example, the functions of the multiple IPUs are kept independent, and it is ensured as far as possible that no data is shared between IPUs.
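The period-driven run decision of an ICO thread might be modeled as below (a simplified sketch with an assumed tick counter; in a real system the RTOS scheduler and the configured period would drive this):

```python
class ICOThread:
    """Runs its collection-and-operation work only when its period elapses."""

    def __init__(self, period: int):
        self.period = period   # configured run period, in ticks (assumed unit)
        self.last_run = 0

    def should_run(self, now: int) -> bool:
        return now - self.last_run >= self.period

    def run(self, now: int) -> None:
        # ... collect real-time information and operate on shared resources ...
        self.last_run = now

ico = ICOThread(period=10)
print(ico.should_run(5))   # False: period not yet reached
print(ico.should_run(10))  # True
```

Note that while such a thread runs, the lower-priority CDB is not scheduled, so no configuration command can reach the CP, which is what makes the ICO's shared-resource access safe without a lock.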
  • the shared memory operations between threads within each IPU do not require any protection mechanism. If data needs to be transmitted between two IPUs, shared data exists between those two IPUs; in this case, the foregoing method further includes: the IPU that initiates the data transmission sends notification information to the IPU that receives the data. To keep the system efficient, the notification information can be transmitted asynchronously. The following principle can be observed when notifying: if data needs to be transferred between two IPUs (for example, the two IPUs require common configuration information, or data interaction is needed between them), conflicts in shared resource access must be avoided.
  • the method further includes: determining whether the thread that initiates the data transmission is an ICO thread, and if yes, determining whether the thread receiving the data is a CP thread; if it is a CP thread, adjusting the priority of the ICO thread in the two IPUs, so that The priority of the first ICO thread is less than the priority of the second ICO thread; wherein, the first ICO thread is an ICO thread in an IPU that initiates data transmission, and the second ICO thread is an ICO thread in an IPU that receives data.
  • for convenience of description, the CP thread may be referred to simply as CP, and the ICO thread simply as ICO. The design and operation of a system according to the above method is described below:
  • a real-time processing system needs to receive user configuration and query processing, and at the same time needs to perform real-time collection and calculation of some data.
  • FIG. 2 is a schematic structural diagram of such a real-time processing system. A thread with a lower priority is set in the system; this embodiment calls this thread the Config Distribution Buffer, i.e., the CDB. The CDB is responsible for receiving the commands input into the system and distributing them to different independent processing units (IPUs) for processing.
  • the content of the real-time collection and operation and the user configuration command sets are classified according to whether they need to share data, and are divided into several IPUs. The threads inside an IPU may share data directly, while between IPUs it is ensured as far as possible that no shared data needs to be accessed.
  • within an IPU, configuration command processing is completed by the high-priority CP thread, i.e., the CP is responsible for receiving and processing external configuration of this IPU; the related information collection and operation is handled in real time by one or more ICO threads with relatively lower priority, i.e., the ICO performs the real-time processing of this IPU (it is required that ICOs within an IPU share no resources with each other, while the ICO and the CP within an IPU may share resources).
  • Referring to FIG. 3, the processing flow of this embodiment includes the following steps: Step S302: starting the threads in the system, including: 1) starting a thread with priority M (the priority corresponding to the value M being low) as the CDB, responsible for caching configuration commands; 2) starting the CP inside each IPU, with a priority higher than M, and registering the correspondence between the command set handled by each CP and that CP in the command mapping table CmdMap in the CDB; 3) starting the ICO inside each IPU, with a thread priority greater than M and less than the priority of the CP inside its IPU;
  • Step S304: receiving a configuration command input by the user into the system, and caching the configuration command in the CDB;
  • Step S306: the ICO performs information collection and calculation on the shared resources. While the ICO is collecting and calculating, the CDB is not scheduled because the ICO's priority is higher than the CDB's; configuration commands therefore stay cached in the CDB, the corresponding CP does not execute, and the ICO can safely access the shared resources;
  • Step S308: when the system is idle, determining whether the configuration command list cached in the CDB is empty; if so, returning to step S304, otherwise executing step S310; Step S310: the CDB pops a cached configuration command according to the FIFO principle, queries the command mapping table CmdMap to determine the CP of the corresponding IPU, and sends the command asynchronously to that CP for processing;
  • Step S312: the CP completes the processing of the shared resources according to the command configuration, and the flow returns to step S310 to continue processing the other cached configuration commands. Between different IPUs in this system, since there is no data-sharing issue, the priority relationship between their threads does not need to be considered.
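Steps S308-S310 can be sketched as a single-pass simulation (per-CP inbox queues stand in for the asynchronous sends; all command names are invented):

```python
from collections import deque

cmd_map = {"add_service": "ServiceCP", "set_alarm": "AlmPerfCP"}  # CmdMap
cdb_cache = deque(["add_service", "set_alarm"])   # S304: user commands cached
cp_inbox = {"ServiceCP": deque(), "AlmPerfCP": deque()}

def cdb_dispatch_when_idle() -> None:
    while cdb_cache:                      # S308: stop when the cache is empty
        cmd = cdb_cache.popleft()         # S310: pop in FIFO order
        cp = cmd_map[cmd]                 # S310: query CmdMap for the CP
        cp_inbox[cp].append(cmd)          # S310: asynchronous send to that CP

cdb_dispatch_when_idle()
print(list(cp_inbox["ServiceCP"]))  # ['add_service']
```

In the real system the loop body would run only while no ICO or CP is active, since the CDB holds the lowest priority; the simulation only shows the FIFO pop, the CmdMap lookup, and the hand-off.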
  • if common configuration information is required between IPUs or data interaction between IPUs is needed, the information can be notified asynchronously between the IPUs. In this case, the restrictions in different scenarios need to be considered.
  • when the notification information is initiated by a CP, no new restriction needs to be added to the thread priorities in the IPU that receives the notification.
  • the schematic diagram of message transmission when the notification information is initiated by the CP is shown in FIG. 4:
  • here the CP of the sending IPU is called CP1, and the CP and ICO of the receiving IPU are CP2 and ICO2, respectively. Because of the role of the CDB, the scheduling of CP1 and CP2 is mutually exclusive, so only the priority relationship between CP1 and ICO2 needs to be considered. If the thread priority relationship is CP1 > ICO2, CP1's execution cannot be interrupted by ICO2, so there is no problem; if the relationship is CP1 <= ICO2, ICO2 may interrupt CP1's execution, but CP1 only gets the chance to notify CP2 or ICO2 when ICO2 is idle, so access to shared resources between CP2 and ICO2 is likewise not a problem.
  • when the notification information needs to be initiated by the ICO of an IPU, i.e., the notification information is data computed in real time, the message transmission diagram is shown in FIG. 5:
  • here the ICO of the sending IPU is called ICO1, and the CP and ICO of the receiving IPU are CP2 and ICO2, respectively.
  • ICO1 will interrupt ICO2 only when the thread priority of ICO1 is greater than that of ICO2. If the notification were then sent to CP2, interrupting the execution of CP2 could cause conflicts in shared resource access. Therefore, when the ICO of the sending IPU needs to send notification information to the CP of the receiving IPU, the thread priority of the sending ICO can be set smaller than that of the receiving ICO. If the notification information is instead processed by the ICO of the receiving IPU, there is no problem. It can also be seen that when the receiving end combines the ICO and the CP, it is not subject to the condition described here.
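The priority rule for ICO-initiated notifications can be expressed as a small predicate (a sketch; the thread-kind strings and priority values are illustrative, not the patent's data structures):

```python
def notification_allowed(sender_kind: str, receiver_kind: str,
                         sender_ico_prio: int, receiver_ico_prio: int) -> bool:
    """Return True if this sender/receiver pairing needs no extra constraint,
    or if the required ICO priority relation (sender ICO < receiver ICO)
    already holds for the restricted ICO -> CP case."""
    if sender_kind == "ICO" and receiver_kind == "CP":
        # only this pairing is restricted: ICO1 must be below ICO2,
        # so ICO1 can never interrupt ICO2 and preempt CP2 mid-access
        return sender_ico_prio < receiver_ico_prio
    return True  # CP-initiated or ICO->ICO notifications need no new condition

print(notification_allowed("ICO", "CP", 3, 6))  # True: ICO1 < ICO2
print(notification_allowed("ICO", "CP", 6, 3))  # False: would risk a conflict
```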
  • the following takes a single-board embedded software system from an actual communication system as an example.
  • the system block diagram is shown in Figure 6.
  • the main control unit configures relevant information to the board system through a communication protocol.
  • the board detects and processes the alarm performance in real time.
  • the shared resource access method includes the following steps: Step 1: starting the threads of the single-board system, including: starting the CDB thread with priority 2; starting the alarm performance unit AlmPerfIPU threads AlmPerfCP (the alarm performance unit configuration command processing thread) and AlmPerfICO (the alarm performance unit real-time information collection and operation thread) with priorities 5 and 3, respectively, and registering the correspondence between the alarm performance command set and AlmPerfCP in the command mapping table CmdMap in the CDB; starting the service unit ServiceIPU threads ServiceCP (the service unit configuration command processing thread) and ServiceICO (the service unit real-time information collection and operation thread) with priorities 8 and 6, respectively, and registering the correspondence between the service command set to be processed and ServiceCP in the command mapping table CmdMap in the CDB.
  • Step 2: the user inputs service configuration and alarm performance configuration commands into the system, and the commands are cached in the CDB.
  • Step 3: when ServiceICO or AlmPerfICO is executed, processing jumps to the corresponding real-time steps.
  • Step 4: when the system is idle, the CDB first pops the service configuration command according to the FIFO (First In First Out) principle, queries the command mapping table CmdMap, and sends the command asynchronously to ServiceCP.
  • Step 5: AlmPerfCP completes the processing of the alarm performance nodes corresponding to the added and deleted services according to the notified service addition and deletion information;
  • Step 6: the CDB pops the alarm performance configuration command and sends it asynchronously to AlmPerfCP for processing;
  • Step 7: AlmPerfCP performs processing according to the alarm performance configuration command.
  • Step 8: ServiceICO periodically queries the real-time information of the service and performs service protocol processing. Suppose the service undergoes protection switching; ServiceICO then asynchronously notifies AlmPerfICO of the switching action, and AlmPerfICO completes the switching of alarm performance detection.
  • Step 9: when the period of AlmPerfICO expires, it queries the alarm performance information and reports alarms and performance.
  • Step 10: return to Step 2 and continue receiving and processing user configuration commands.
  • in the above flow, the division of IPUs in the system and the composition of the CP and ICO are determined, and the priorities in each unit are assigned, which avoids restricting access by locking shared resources with semaphores. This is equivalent to eliminating the source of failures caused by shared resource access; it can also improve the system architecture design, thereby improving the reliability, maintainability and ease of use of the system.
  • Embodiment 2. This embodiment provides a real-time processing system. Referring to FIG. 7, the system includes the following modules:
  • the thread startup module 72 is configured to start the threads of the real-time processing system, where the real-time processing system includes multiple IPUs and each IPU includes a CP thread and an ICO thread, with the priority relationship of threads within an IPU being: priority of the CP thread > priority of the ICO thread; the configuration command cache module 74, connected to the thread startup module 72, is configured to receive a configuration command input by the user and cache the configuration command in the CDB, where the thread priority of the CDB < the priority of the ICO thread; the resource access module 76, connected to the configuration command cache module 74, is configured to access the shared resources within each IPU according to the priorities of the respective threads.
  • in this embodiment, shared resources are accessed according to thread priority, which avoids access conflicts when multiple threads share resources, solves the problem that semaphore locking easily causes the system to hang, and enhances the stability and reliability of the system. Referring to FIG. 8, the resource access module 76 includes: a configuration command reading unit 762 configured to read a configuration command from the CDB when the real-time processing system is in an idle state; a configuration command sending unit 764, connected to the configuration command reading unit 762, configured to determine the CP thread of the corresponding IPU according to the configuration command and send the configuration command to the determined CP thread; and a processing unit 766, connected to the configuration command sending unit 764, configured to process the shared resources, through the CP thread, according to the configuration command.
  • the configuration command reading unit 762 can read the configuration command according to the FIFO principle when reading the configuration command from the CDB.
  • a command mapping table may be set in the CDB, and the correspondence between the command set and the CP is saved in the mapping table.
  • the configuration command sending unit 764 determining the CP thread of the corresponding IPU according to the configuration command includes: querying the command mapping table according to the configuration command, where the command mapping table stores the correspondence between command sets and CP threads; and determining the CP thread of the corresponding IPU according to the query result.
  • the configuration command sending unit 764 may send the configuration command to the determined CP thread asynchronously; after sending the configuration command, the CDB can proceed with its subsequent operations.
  • the above ICO thread executes real-time information acquisition and operation commands, so it determines whether to run according to the configured time period.
  • the resource access module 76 further includes: an ICO thread access unit configured to run the ICO thread to access the shared resources when the running time of the ICO thread arrives.
  • calling delay operations of the operating system is prohibited while the threads of an IPU in this embodiment are running.
  • the functional association between the multiple IPUs can be minimized or eliminated; for example, the functions of the multiple IPUs are kept independent, and no data is shared between the multiple IPUs.
  • the IPU that initiates the data transmission sends a notification message to the IPU that receives the data.
  • the system further includes: a first determining module configured to determine whether the thread that initiates the data transmission is an ICO thread; a second determining module configured to determine, when the determination result of the first determining module is yes, whether the thread receiving the data is a CP thread; and a priority adjustment module configured to adjust, when the determination result of the second determining module is that it is a CP thread, the priorities of the ICO threads in the two IPUs so that the priority of the first ICO thread is less than that of the second ICO thread, where the first ICO thread is the ICO thread in the IPU that initiates the data transmission and the second ICO thread is the ICO thread in the IPU that receives the data.
  • in this embodiment, shared resources are accessed according to thread priority, which avoids access conflicts when multiple threads share resources, solves the problem that semaphore locking easily causes the system to hang, and enhances the stability and reliability of the system.
  • the above embodiments can, at the system design stage, determine the division of IPUs and the composition of the CP and ICO in the system according to analysis of the system's key models and the distribution of shared resources, and assign the priorities of the CP and ICO in each unit, thereby avoiding the use of semaphores to lock the corresponding shared resources before access. This is equivalent to eliminating the source of failures caused by shared resource access, and can also improve the system architecture design, thereby improving the system's reliability, maintainability and ease of use.
  • the modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that herein; alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple of the modules or steps may be fabricated into a single integrated circuit module.
  • the invention is not limited to any specific combination of hardware and software.
  • the above are only preferred embodiments of the present invention and are not intended to limit the present invention; those skilled in the art can make various modifications and changes to the present invention. Any modification, equivalent substitution, improvement, etc. made within the spirit and scope of the present invention is intended to be included within the protection scope of the present invention.

Abstract

The present invention discloses a shared resource access method in a real-time processing system, and a real-time processing system. The method includes: starting the threads of the real-time processing system, where the real-time processing system includes multiple IPUs and each IPU includes a CP thread and an ICO thread, with the priority relationship of threads within an IPU being: priority of the CP thread > priority of the ICO thread; receiving a configuration command input by a user and caching the configuration command in a CDB, where the thread priority of the CDB < the priority of the ICO thread; and, within each IPU, accessing shared resources according to the priority of each thread. The present invention solves the problem that semaphore locking easily causes the system to hang, and enhances the stability and reliability of the system.

Description

实时处理系统中的共享资源访问方法和实时处理系统 技术领域 本发明涉及通信领域, 具体而言, 涉及一种实时处理系统中的共享资源访问方法 和实时处理系统。 背景技术 对于实时处理系统, 数据的实时处理是系统的重要考量指标。 为了提高数据的并 行处理效率, 往往采用多线程处理机制。 多线程处理过程中, 可能会发生两个以上的 线程同时访问相同的数据的情况, 因此需要考虑对系统中共享资源的访问处理问题。 目前, 在多线程访问共享数据时, 通常的做法是引入信号量加锁来保证资源的互斥访 问。 但这种信号量加锁的方法往往容易产生漏锁或死锁等问题, 使得在系统调试阶段 容易发生异常挂起 (即, 死机) 等现象, 并且该异常情况的定位比较困难。 针对相关技术中信号量加锁的方式容易导致系统挂起的问题, 目前尚未提出有效 的解决方案。 发明内容 本发明实施例提供了一种实时处理系统中的共享资源访问方法和实时处理系统, 以至少解决上述信号量加锁的方式容易导致系统挂起的问题。 根据本发明的一个实施例, 提供了一种实时处理系统中的共享资源访问方法, 包 括: 启动实时处理系统的线程; 其中, 该实时处理系统包括多个 IPU, 每个 IPU包括: CP线程和 ICO线程, IPU内的线程的优先级关系为: CP线程的优先级 >ICO线程的优 先级; 接收用户输入的配置命令, 将配置命令缓存在 CDB中; 其中, CDB的线程优 先级 <ICO线程的优先级; 在每一个 IPU内, 根据各个线程的优先级访问共享资源。 其中, 根据各个线程的优先级访问共享资源包括: 在实时处理系统处于空闲状态 时, 从 CDB中读取配置命令; 根据配置命令确定对应 IPU的 CP线程, 将配置命令发 送给确定的 CP线程; CP线程根据配置命令对共享资源进行处理。 其中, 从 CDB中读取配置命令时按照 FIFO原则读取。 其中, 根据配置命令确定对应 IPU的 CP线程包括: 根据配置命令查询命令映射 表,命令映射表中保存有命令集与 CP线程的对应关系;根据查询的结构确定对应 IPU 的 CP线程。 上述将配置命令发送给确定的 CP线程是通过异步发送方式发送的。 上述根据各个线程的优先级访问共享资源包括: 当 ICO线程的运行时间到达时, 运行 ICO线程访问共享资源。 上述 IPU的线程运行过程中禁止调用操作系统的延时操作。 其中, 上述多个 IPU的功能独立。 上述每一个 IPU内各线程间的共享内存操作不需要采用任何机制进行保护。 如果两个 IPU之间需要传输数据, 上述方法还包括: 发起数据传输的 IPU向接收 数据的 IPU发送通告信息。 其中, 上述发起数据传输的 IPU向接收数据的 IPU发送通告信息包括: 确定发起 数据传输的线程是否为 ICO线程, 如果是, 确定接收数据的线程是否为 CP线程; 如 果是 CP线程, 则调整两个 IPU中的 ICO线程的优先级, 使第一 ICO线程的优先级小 于第二 ICO线程的优先级; 其中, 第一 ICO线程为发起数据传输的 IPU中的 ICO线 程, 第二 ICO线程为接收数据的 IPU中的 ICO线程。 根据本发明的另一实施例, 提供了一种实时处理系统, 包括: 线程启动模块, 设 置为启动实时处理系统的线程; 其中, 该实时处理系统包括多个 IPU, 每个 IPU包括: CP线程和 ICO线程, IPU内的线程的优先级关系为: CP线程的优先级 >ICO线程的优 先级; 配置命令缓存模块, 设置为接收用户输入的配置命令, 将配置命令缓存在 CDB 中; 其中, CDB的线程优先级 <ICO线程的优先级; 资源访问模块, 设置为在每一个 IPU内, 根据各个线程的优先级访问共享资源。 其中, 上述资源访问模块包括: 配置命令读取单元, 设置为在实时处理系统处于 空闲状态时, 从 CDB中读取配置命令; 配置命令发送单元, 设置为根据配置命令确定 对应 IPU的 CP线程, 将配置命令发送给确定的 CP线程; 处理单元, 设置为通过 CP 线程根据配置命令对共享资源进行处理。 通过本发明, 根据线程的优先级访问共享资源, 规避多个线程共享资源时的访问 冲突, 解决了因信号量加锁的方式容易导致系统挂起的问题, 增强了系统的稳定性和 可靠性。 附图说明 此处所说明的附图用来提供对本发明的进一步理解, 构成本申请的一部分, 本发 明的示意性实施例及其说明用于解释本发明, 并不构成对本发明的不当限定。 在附图 中: 图 1是根据本发明实施例 1的实时处理系统中的共享资源访问方法流程图; 图 2是根据本发明实施例 1的实时处理系统结构示意图; 图 3是根据本发明实施例 1的共享资源访问的处理流程图; 图 4是根据本发明实施例 
1的通告信息由 CP发起时的消息发送示意图; 图 5是根据本发明实施例 1的通告信息由 ICO发起时的消息发送示意图; 图 6是根据本发明实施例 1的单板嵌入式软件系统的结构示意图; 图 7是根据本发明实施例 2的实时处理系统的结构框图; 图 8是根据本发明实施例 2的资源访问模块的结构框图。 具体实施方式 下文中将参考附图并结合实施例来详细说明本发明。 需要说明的是, 在不冲突的 情况下, 本申请中的实施例及实施例中的特征可以相互组合。 本发明实施例利用线程优先级来规避对资源共享访问时的处理, 可以应用在多线 程单核实时处理系统中, 能够实现对共享资源的互斥访问。 基于此, 本发明实施例提 供了一种实时处理系统中的共享资源访问方法和实时处理系统。 实施例 1 本实施例提供了一种实时处理系统中的共享资源访问方法, 参见图 1, 该方法包 括以下步骤 (步骤 S102-106): 步骤 S102, 启动实时处理系统的线程; 其中,本实施例的实时处理系统包括多个独立处理单元(Independent Process Unit, IPU), 每个 IPU包括: 配置命令处理(Config Process, CP)线程和实时信息采集与运 算 (Information Collection and Operation, ICO) 线程, IPU内线程的优先级关系为: CP线程的优先级 >ICO线程的优先级; 在实际实现时, 一个 IPU内可以设置一个 CP线程和一个 IC0线程, 在一个 IPU 内的 CP线程和 IC0线程可以共享资源, 当有多个实时信息采集与运算命令均需要访 问该 IPU内的共享资源时, 可以设置这多个实时信息采集与运算命令均使用一个 IC0 线程, 通过设置采集与运算命令的周期触发时刻来达到互斥访问共享资源的目的; 步骤 S104 , 接收用户输入的配置命令, 将该配置命令缓存在配置分发缓存器
(Config Distribution Buffer, CDB) 中; 其中, CDB的线程优先级 <IC0线程的优先 级; 步骤 S106, 在每一个 IPU内, 根据各个线程的优先级访问共享资源。 本实施例可以在系统设计阶段, 根据系统的关键模型分析及共享资源的分布情况 确定系统中 IPU的划分及 CP、 IC0的组成, 并划分好各单元中 CP和 IC0的优先级, 避免通过使用信号量在相应共享资源前加锁来限制访问, 相当于消除因共享资源访问 可能引起的故障源头, 还可以改良系统的架构设计, 从而提高系统的可靠性、 维护性 和易用性。 本实施例根据线程的优先级访问共享资源,规避多个线程共享资源时的访问冲突, 解决了因信号量加锁的方式容易导致系统挂起的问题,增强了系统的稳定性和可靠性。 其中,上述步骤 S106中的根据各个线程的优先级访问共享资源可以包括:在该实 时处理系统处于空闲状态时, 从 CDB 中读取配置命令; 根据配置命令确定对应 IPU 的 CP线程, 将该配置命令发送给确定的 CP线程; 该 CP线程根据该配置命令对共享 资源进行处理。 从 CDB中读取配置命令时可以按照先进先出(First In First Out, FIFO)原则读取。 为了便于确定每个配置命令对应的 CP线程,可以在 CDB中设置一个命令映射表, 将命令集与 CP 的对应关系保存在该映射表中。 基于此, 上述根据配置命令确定对应 IPU的 CP线程包括:根据配置命令查询命令映射表,该命令映射表中保存有命令集与 CP线程的对应关系; 根据查询的结构确定对应 IPU的 CP线程。 上述将配置命令发送给确定的 CP线程可以通过异步发送方式发送, 该异步发送 方式指发送完该配置命令后, 该 CDB即可以执行后面的操作。
The ICO thread executes real-time information collection and operation commands, and therefore decides whether to run according to its configured time period. On this basis, accessing shared resources according to the priorities of the respective threads further includes: when the running time of the ICO thread arrives, running the ICO thread to access the shared resources.

To prevent the threads within an IPU from hanging while running, and to guarantee the mutual exclusion mechanism for shared resources, the threads of an IPU in this embodiment are prohibited from calling delay operations of the operating system while running.

For simplicity of implementation, when partitioning the IPUs in the system, the functional coupling between the IPUs should be kept minimal or eliminated; for example, the functions of the IPUs can be made independent of one another, ensuring as far as possible that no data is shared between them. Shared memory operations between the threads within any single IPU need no protection mechanism.

If data needs to be transferred between two IPUs, the two IPUs share data. In that case, the method further includes: the IPU initiating the data transfer sends notification information to the IPU receiving the data. To preserve system efficiency, the notification can be transmitted asynchronously. The following rule can be observed when notifying: if data needs to be transferred between two IPUs (for example, the two IPUs need common configuration information, or need to exchange data), then to avoid access conflicts on shared resources, the method further includes: determining whether the thread initiating the data transfer is an ICO thread; if so, determining whether the thread receiving the data is a CP thread; and if it is a CP thread, adjusting the priorities of the ICO threads in the two IPUs so that the priority of the first ICO thread is lower than that of the second ICO thread, where the first ICO thread is the ICO thread in the IPU initiating the data transfer and the second ICO thread is the ICO thread in the IPU receiving the data.

For convenience of description, the CP thread may be abbreviated below as CP, and the ICO thread as ICO. Based on the above method, the design and operation of the system are briefly described below.

A real-time processing system needs to receive configuration and query processing from the user, and also needs to perform real-time collection and operation on some data. As shown in the schematic structural diagram of Fig. 2, a low-priority thread is set up in the system; this embodiment calls it the Config Distribution Buffer, i.e., the CDB above. The CDB is responsible for receiving the commands input to the system and distributing them to different Independent Process Units (IPUs) for processing.

The real-time collection and operation content and the user configuration command sets are classified and grouped according to whether they need to share data, and divided into a number of IPUs. The threads inside an IPU may share data directly, while between IPUs it should be ensured as far as possible that no shared data needs to be accessed. Within an IPU, the configuration command processing CP is handled by a high-priority thread, i.e., the CP is responsible for receiving and processing external configuration of this IPU; the related information collection and operation ICO is handled in real time by several threads of relatively lower priority, i.e., the ICO performs the real-time processing of this IPU (it is required that the ICOs within an IPU share no resources with one another, while the ICOs and the CP within an IPU may share resources).

Referring to Fig. 3, the processing flow of this embodiment includes the following steps:

Step S302: start the threads in the system, including:

1) start a thread of priority M (the value M corresponds to a low priority) as the CDB, responsible for buffering configuration commands;

2) start the CP inside each IPU, with a priority higher than M, and register the correspondence between the command set processed by the CP and the CP in the command mapping table CmdMap in the CDB;

3) start the ICO inside each IPU, with a thread priority greater than M and lower than the priority of the CP inside its IPU.

Step S304: receive a configuration command input to the system by the user, and buffer the configuration command in the CDB.

Step S306: when the ICO performs information collection and operation, it performs information collection and operation on the shared resources.
While the ICO is performing information collection and operation, because the priority of the ICO is higher than that of the CDB, the CDB is not scheduled and configuration commands remain buffered in the CDB; the corresponding CP is therefore not executed, and the ICO can safely access the shared resources.

Step S308: when the system is idle, determine whether the list of configuration commands buffered in the CDB is empty; if so, return to step S304; otherwise, go to step S310.

Step S310: the CDB pops a buffered configuration command on a FIFO basis, queries the command mapping table CmdMap to determine the CP of the corresponding IPU, and sends the command asynchronously to that CP for processing.

Step S312: the CP completes the processing of the shared resources according to the command configuration, and the flow returns to step S310 to continue processing the other buffered configuration commands.

Since there is no data sharing between different IPUs in this system, no inter-thread priority relations need be set up between them. If the IPUs need common configuration information, or data must be exchanged between them, information can be notified asynchronously between the IPUs; in that case, the restrictions applying in different scenarios must be considered.

When the notification information is initiated by a CP, no additional restriction need be imposed on the thread priorities in the receiving IPU; the corresponding message sending is shown in Fig. 4. Call the CP of the sending IPU CP1, and the CP and ICO of the receiving IPU CP2 and ICO2 respectively. Because of the CDB, the scheduling of CP1 and CP2 is mutually exclusive, so only the priority relation between CP1 and ICO2 need be considered. If the priority relation is CP1 > ICO2, the execution of CP1 will not be interrupted by ICO2, so there is no problem. If the relation is CP1 <= ICO2, ICO2 may interrupt CP1, but CP1 only has the opportunity to notify CP2 or ICO2 when ICO2 is idle, so access to the shared resources between CP2 and ICO2 is likewise unproblematic.

When the notification information needs to be initiated by the ICO of an IPU, i.e., the notification carries real-time computed data, the message sending is shown in Fig. 5. Call the ICO of the sending IPU ICO1, and the CP and ICO of the receiving IPU CP2 and ICO2 respectively. Only when the thread priority of ICO1 is greater than that of ICO2 will ICO1 interrupt ICO2, and if in that case the notification is sent to CP2, the execution of CP2 may cause access conflicts on the shared resources. Therefore, when an IPU needs its ICO to send notification information to the CP of a receiving IPU, the thread priority of the sending ICO can be set lower than that of the receiving ICO. If the notification information is handled by the ICO of the receiving IPU instead, there is no problem. It can also be seen that, when the receiving end combines its ICO and CP into a single thread, the restriction described here does not apply.

A board-level embedded software system in an actual communication system is now taken as an example; its block diagram is shown in Fig. 6. A master control unit configures the relevant information, via some communication protocol, to this board system for execution; the board detects and processes alarm and performance data in real time and reports it to the master control unit, and the board system also performs service-protocol-related real-time processing. Assume that in this system a larger priority value means a higher priority. The shared resource access method includes the following steps:

Step 1: start the threads of the board system, including: start the CDB thread, with priority 2; start the threads AlmPerfCP (the configuration command processing thread of the alarm/performance unit) and AlmPerfICO (the real-time information collection and operation thread of the alarm/performance unit) of the alarm/performance unit AlmPerfIPU, with priorities 5 and 3 respectively, and register the correspondence between the alarm/performance command set and AlmPerfCP in the command mapping table CmdMap in the CDB; start the threads ServiceCP (the configuration command processing thread of the service unit) and ServiceICO (the real-time information collection and operation thread of the service unit) of the service unit ServiceIPU, with priorities 8 and 6 respectively, and register the correspondence between the processed command set and ServiceCP in CmdMap in the CDB.

Step 2: the user inputs service configuration and alarm/performance configuration commands to the system; the commands are buffered in the CDB.

Step 3: when ServiceICO or AlmPerfICO is executing, jump to step 6; when the system is idle, the CDB first pops the service configuration command on a FIFO (First In First Out) basis, queries the command mapping table CmdMap, and sends the command asynchronously to ServiceCP for processing.

Step 4: ServiceCP completes the service configuration processing, and then asynchronously notifies AlmPerfCP of AlmPerfIPU of the relevant service addition/deletion information.

Step 5: according to the notified service addition/deletion information, AlmPerfCP completes the processing of the alarm/performance nodes corresponding to the added or deleted services.

Step 6: the CDB pops the alarm/performance configuration command and sends it asynchronously to AlmPerfCP for processing.

Step 7: AlmPerfCP processes according to the alarm/performance configuration command.

Step 8: ServiceICO periodically queries real-time service information and performs service protocol processing; suppose a protection switch occurs in a service — the switch action is then notified asynchronously to AlmPerfICO, and AlmPerfICO completes the switchover of alarm/performance detection.

Step 9: when the AlmPerfICO period timer expires, query the alarm/performance information and report it.

Step 10: return to step 2 and continue receiving and processing user configuration commands.

In this embodiment, at the system design stage, the partitioning of the IPUs in the system and the composition of the CPs and ICOs are determined according to the analysis of the system's key models and the distribution of shared resources, and the priorities of the units are assigned. This avoids using semaphores to lock the corresponding shared resources to restrict access, which amounts to eliminating the source of faults that shared resource access may cause; it can also improve the architectural design of the system, thereby improving its reliability, maintainability, and usability.

Embodiment 2

This embodiment provides a real-time processing system. Referring to Fig. 7, the system includes the following modules:

a thread starting module 72, configured to start the threads of the real-time processing system, where the real-time processing system includes multiple IPUs, each IPU includes a CP thread and an ICO thread, and the priority relation of the threads within an IPU is: priority of the CP thread > priority of the ICO thread;

a configuration command buffering module 74, connected to the thread starting module 72 and configured to receive a configuration command input by the user and buffer it in a CDB, where the thread priority of the CDB < the priority of the ICO thread;

a resource access module 76, connected to the configuration command buffering module 74 and configured to access shared resources within each IPU according to the priorities of the respective threads.

This embodiment accesses shared resources according to thread priorities, avoids access conflicts when multiple threads share resources, solves the problem that semaphore locking easily causes the system to hang, and enhances the stability and reliability of the system.

Referring to Fig. 8, the resource access module 76 includes: a configuration command reading unit 762, configured to read a configuration command from the CDB when the real-time processing system is idle; a configuration command sending unit 764, connected to the configuration command reading unit 762 and configured to determine the CP thread of the corresponding IPU according to the configuration command and send the configuration command to the determined CP thread; and a processing unit 766, connected to the configuration command sending unit 764 and configured to process the shared resources through the CP thread according to the configuration command.

The configuration command reading unit 762 may read configuration commands from the CDB on a FIFO basis.

To make it easy to determine the CP thread corresponding to each configuration command, a command mapping table can be set up in the CDB, storing the correspondence between command sets and CPs. On this basis, the configuration command sending unit 764 determining the CP thread of the corresponding IPU according to the configuration command includes: querying the command mapping table according to the configuration command, where the table stores the correspondence between command sets and CP threads; and determining the CP thread of the corresponding IPU according to the query result.

The configuration command sending unit 764 may send the configuration command to the determined CP thread asynchronously; asynchronous sending means that once the configuration command has been sent, the CDB can proceed with its subsequent operations.

The ICO thread executes real-time information collection and operation commands, and therefore decides whether to run according to its configured time period. On this basis, the resource access module 76 further includes an ICO thread access unit, configured to run the ICO thread to access the shared resources when its running time arrives.

To prevent the threads within an IPU from hanging while running, and to guarantee the mutual exclusion mechanism for shared resources, the threads of an IPU in this embodiment are prohibited from calling delay operations of the operating system while running.

For simplicity of implementation, when partitioning the IPUs in the system, the functional coupling between the IPUs should be kept minimal or eliminated; for example, the functions of the IPUs can be made independent, ensuring that no data is shared between them.

If data needs to be transferred between two IPUs (for example, the two IPUs need common configuration information, or need to exchange data), the IPU initiating the data transfer sends notification information to the IPU receiving the data. To avoid access conflicts on shared resources, the system further includes: a first determining module, configured to determine whether the thread initiating the data transfer is an ICO thread; a second determining module, configured to determine, when the result of the first determining module is yes, whether the thread receiving the data is a CP thread; and a priority adjusting module, configured to adjust, when the result of the second determining module is a CP thread, the priorities of the ICO threads in the two IPUs so that the priority of the first ICO thread is lower than that of the second ICO thread, where the first ICO thread is the ICO thread in the IPU initiating the data transfer and the second ICO thread is the ICO thread in the IPU receiving the data.

This embodiment accesses shared resources according to thread priorities, avoids access conflicts when multiple threads share resources, solves the problem that semaphore locking easily causes the system to hang, and enhances the stability and reliability of the system.

As can be seen from the above description, in the above embodiments, at the system design stage, the partitioning of the IPUs in the system and the composition of the CPs and ICOs can be determined according to the analysis of the system's key models and the distribution of shared resources, and the priorities of the CPs and ICOs in each unit can be assigned, avoiding restricting access by using semaphores to lock the corresponding shared resources. This amounts to eliminating the source of faults that shared resource access may cause, and can also improve the architectural design of the system, thereby improving its reliability, maintainability, and usability.

Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; in some cases the steps shown or described can be executed in an order different from that given here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.

The above are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the scope of protection of the present invention.
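The priority constraints running through the embodiments above — CP > ICO > CDB inside each IPU, plus the rule that an ICO which notifies a peer IPU's CP must have a lower priority than that peer's ICO — can be expressed as a small validity check. The function name, the dictionary layout, and the notification-tuple format are illustrative assumptions for this sketch, not an API defined by the patent:

```python
def check_priorities(cdb_priority, ipus, notifications=()):
    """Check the priority rules described above.

    ipus: name -> {"CP": priority, "ICO": priority}
    notifications: tuples (sender_ipu, sender_thread, receiver_ipu,
    receiver_thread) for planned cross-IPU notifications.
    """
    # Within each IPU: CP priority > ICO priority > CDB priority
    for threads in ipus.values():
        if not (threads["CP"] > threads["ICO"] > cdb_priority):
            return False
    # ICO -> CP notification across IPUs: sender's ICO must be
    # lower-priority than the receiver's ICO
    for s_ipu, s_thread, r_ipu, r_thread in notifications:
        if s_thread == "ICO" and r_thread == "CP":
            if not (ipus[s_ipu]["ICO"] < ipus[r_ipu]["ICO"]):
                return False
    return True

# Priorities from the board-software example:
# CDB = 2, AlmPerfCP/AlmPerfICO = 5/3, ServiceCP/ServiceICO = 8/6
ipus = {"AlmPerf": {"CP": 5, "ICO": 3}, "Service": {"CP": 8, "ICO": 6}}
print(check_priorities(2, ipus, [("AlmPerf", "ICO", "Service", "CP")]))  # True
```

With the example priorities, an ICO-to-CP notification from AlmPerfIPU toward ServiceIPU is permitted because 3 < 6; the reverse direction would violate the rule and require the ICO priorities to be adjusted.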

Claims

1. A shared resource access method in a real-time processing system, comprising:
starting threads of the real-time processing system, wherein the real-time processing system comprises multiple Independent Process Units (IPUs), each IPU comprises a configuration command processing (CP) thread and a real-time information collection and operation (ICO) thread, and the priority relation of the threads within an IPU is: priority of the CP thread > priority of the ICO thread;
receiving a configuration command input by a user, and buffering the configuration command in a Config Distribution Buffer (CDB), wherein the thread priority of the CDB < the priority of the ICO thread;
within each IPU, accessing shared resources according to the priorities of the respective threads.
2. The method according to claim 1, wherein accessing shared resources according to the priorities of the respective threads comprises:
reading the configuration command from the CDB when the real-time processing system is idle;
determining the CP thread of the corresponding IPU according to the configuration command, and sending the configuration command to the determined CP thread;
the CP thread processing the shared resources according to the configuration command.
3. The method according to claim 2, wherein the configuration command is read from the CDB on a First In First Out (FIFO) basis.
4. The method according to claim 2, wherein determining the CP thread of the corresponding IPU according to the configuration command comprises:
querying a command mapping table according to the configuration command, wherein the command mapping table stores the correspondence between command sets and CP threads;
determining the CP thread of the corresponding IPU according to the query result.
5. The method according to claim 2, wherein the configuration command is sent to the determined CP thread by asynchronous sending.
6. The method according to claim 1, wherein accessing shared resources according to the priorities of the respective threads comprises:
when the running time of the ICO thread arrives, running the ICO thread to access the shared resources.
7. The method according to any one of claims 1-6, wherein, while the threads of the IPU are running, calling delay operations of the operating system is prohibited.
8. The method according to any one of claims 1-6, wherein the functions of the multiple IPUs are independent.
9. The method according to any one of claims 1-6, wherein shared memory operations between the threads within each IPU need no protection mechanism.
10. The method according to claim 1, wherein, if data needs to be transferred between two IPUs, the method further comprises:
the IPU initiating the data transfer sending notification information to the IPU receiving the data.
11. The method according to claim 10, wherein the IPU initiating the data transfer sending notification information to the IPU receiving the data comprises:
determining whether the thread initiating the data transfer is an ICO thread, and if so, determining whether the thread receiving the data is a CP thread;
if it is a CP thread, adjusting the priorities of the ICO threads in the two IPUs so that the priority of a first ICO thread is lower than the priority of a second ICO thread, wherein the first ICO thread is the ICO thread in the IPU initiating the data transfer, and the second ICO thread is the ICO thread in the IPU receiving the data.
12. A real-time processing system, comprising:
a thread starting module, configured to start threads of the real-time processing system, wherein the real-time processing system comprises multiple Independent Process Units (IPUs), each IPU comprises a configuration command processing (CP) thread and a real-time information collection and operation (ICO) thread, and the priority relation of the threads within an IPU is: priority of the CP thread > priority of the ICO thread;
a configuration command buffering module, configured to receive a configuration command input by a user and buffer the configuration command in a Config Distribution Buffer (CDB), wherein the thread priority of the CDB < the priority of the ICO thread;
a resource access module, configured to access shared resources within each IPU according to the priorities of the respective threads.
13. The system according to claim 12, wherein the resource access module comprises:
a configuration command reading unit, configured to read the configuration command from the CDB when the real-time processing system is idle;
a configuration command sending unit, configured to determine the CP thread of the corresponding IPU according to the configuration command, and send the configuration command to the determined CP thread;
a processing unit, configured to process the shared resources through the CP thread according to the configuration command.
PCT/CN2012/073555 WO2012171398A1 (zh) 2011-06-14 2012-04-05 Shared resource access method in a real-time processing system, and real-time processing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110159272.2 2011-06-14
CN201110159272.2A CN102831007B (zh) 2011-06-14 Shared resource access method in a real-time processing system, and real-time processing system

Publications (1)

Publication Number Publication Date
WO2012171398A1 (zh)

Family

ID=47334156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/073555 WO2012171398A1 (zh) 2012-04-05 2011-06-14 Shared resource access method in a real-time processing system, and real-time processing system

Country Status (2)

Country Link
CN (1) CN102831007B (zh)
WO (1) WO2012171398A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631568A (zh) * 2013-12-20 2014-03-12 厦门大学 Multi-thread parallel computing method for medical images
CN104820622B (zh) * 2015-05-22 2019-07-12 上海斐讯数据通信技术有限公司 Shared memory lock management and control method and system
CN105930134B (zh) * 2016-04-20 2018-10-23 同光科技有限公司 Instrument instruction processing method, processor, and instrument
CN110147269B (zh) * 2019-05-09 2023-06-13 腾讯科技(上海)有限公司 Event processing method, apparatus, device, and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
EP0783152A2 (en) * 1996-01-04 1997-07-09 Sun Microsystems, Inc. Method and apparatus for automatically managing concurrent access to a shared resource in a multi-threaded programming environment
CN1615472A (zh) * 2002-01-24 2005-05-11 皇家飞利浦电子股份有限公司 Executing processes in a multi-processing environment
CN1755636A (zh) * 2004-09-30 2006-04-05 国际商业机器公司 System and method for sharing resources between real-time and virtualized operating systems

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6490611B1 (en) * 1999-01-28 2002-12-03 Mitsubishi Electric Research Laboratories, Inc. User level scheduling of inter-communicating real-time tasks
CN100442709C (zh) * 2005-06-17 2008-12-10 华为技术有限公司 Device operation method in a network management system
CN101673223B (zh) * 2009-10-22 2012-03-21 同济大学 Thread scheduling implementation method based on an on-chip multiprocessor


Also Published As

Publication number Publication date
CN102831007B (zh) 2017-04-12
CN102831007A (zh) 2012-12-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12799785

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12799785

Country of ref document: EP

Kind code of ref document: A1