WO2011020281A1 - An Efficient Memory Pool Access Method (一种高效的内存池访问方法) - Google Patents

An Efficient Memory Pool Access Method (一种高效的内存池访问方法)

Info

Publication number
WO2011020281A1
WO2011020281A1 (PCT/CN2009/076336, CN2009076336W)
Authority
WO
WIPO (PCT)
Prior art keywords
thread
memory
write
read
pointer
Prior art date
Application number
PCT/CN2009/076336
Other languages
English (en)
French (fr)
Inventor
刘骁
Original Assignee
深圳市融创天下科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市融创天下科技发展有限公司
Publication of WO2011020281A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Definitions

  • the present invention relates to the field of data processing, and in particular to an efficient memory pool access method.
  • a process is like a big container for an application. Once the application is run, it is as if the application has been loaded into the container, and other things can be put into the container as well (such as the variable data the application needs while running, the DLL files it references, and so on). When the application is run a second time, the contents of the first container are not discarded; the system finds a new process container to hold the second instance.
  • the process consists of three parts: the process control block, the program segment, and the data segment.
  • a process can contain several threads. Threads let the application do several things at once (for example, one thread writes a file to disk while another receives the user's keystrokes and reacts, without the two interfering).
  • after a program is run, the first thing the system does is create a default thread for the program's process; the program can then add or remove threads as needed. A process is a program that can execute concurrently.
  • as the execution of a program over a data set, a process is an independent unit of resource allocation and scheduling in the system. It is also called an activity, a path, or a task.
  • a process is the basis of the operating system's structure: a program in execution; an instance of a program running on the computer; an entity that can be assigned to and executed by a processor; an active unit described by a single sequential thread of execution, a current state, and an associated set of system resources.
  • a process is a running instance of an application, one dynamic execution of that application. Put simply, it is an executing program that the operating system is currently running.
  • the programs currently executing in the system include the programs the system needs to manage the computer and carry out its various operations; the additional programs the user opens and runs; and, of course, rogue programs that run automatically without the user's knowledge (these may well be viruses).
  • a process is one execution of a program on a computer: when you run a program, you start a process. Clearly, the program is dead (static) and the process is alive (dynamic). Processes can be divided into system processes and user processes. Processes that carry out the various functions of the operating system are system processes; they are the operating system itself in its running state. User processes are all the processes started by you. A process is the unit in which an operating system allocates resources.
  • a thread is an entity within a process and the basic unit independently scheduled and dispatched by the system. A thread owns no system resources of its own, only the few resources indispensable to its execution, but it shares with the other threads of the same process all the resources the process owns. One thread can create and destroy another thread, and multiple threads within the same process can execute concurrently. Because threads constrain one another, a thread's execution appears intermittent. Threads also have three basic states: ready, blocked, and running.
  • a thread is a single sequential flow of control within a program. Running multiple threads in a single program to do different jobs at the same time is called multithreading.
  • threads differ from processes in that a child process and its parent have different code and data spaces, whereas multiple threads share one data space, each thread having its own execution stack and program counter as its execution context. Multithreading mainly saves and better utilizes CPU time; whether it pays off depends on the situation. Running threads consumes the computer's memory resources and CPU.
  • a characteristic of current embedded operating systems is that the kernel supports multithread preemption.
  • when two threads access the same memory pool, the common approach is to add a mutex to prevent data-access errors. If the two threads access the pool very frequently, the system overhead imposed by the mutual exclusion is very high; tests show performance can drop fivefold. After a period of study, the mutex can now be removed, saving that extra system overhead. This will improve the performance of a mobile streaming media system.
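The lock-based scheme this bullet describes can be sketched as follows. All identifiers here are illustrative (the patent gives no code), and a C11 `atomic_flag` spinlock stands in for the mutex so the sketch needs no threading library; the point is only that every read and every write pays for a lock round trip:

```c
#include <assert.h>
#include <stdatomic.h>

/* A stand-in for the mutex-guarded pool the background criticizes.
   pool_bytes_used is a toy model of the shared state. */
static atomic_flag pool_lock = ATOMIC_FLAG_INIT;
static int pool_bytes_used = 0;

static void lock(void)   { while (atomic_flag_test_and_set(&pool_lock)) ; }
static void unlock(void) { atomic_flag_clear(&pool_lock); }

static void locked_write(int len)
{
    lock();                           /* every write pays the lock round trip */
    pool_bytes_used += len;
    unlock();
}

static int locked_read(int len)
{
    lock();                           /* ...and so does every read */
    int taken = (len <= pool_bytes_used) ? len : 0;
    pool_bytes_used -= taken;
    unlock();
    return taken;
}
```

Removing this per-access locking is exactly the overhead the invention sets out to eliminate.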
  • the present invention provides an efficient memory pool access method, and the specific steps of the method are as follows:
  • the memory pool is composed of a plurality of memory blocks; a multithreaded application is started, first ensuring that the two threads occupying the memory read and write in the same direction;
  • Figure 1 is a schematic flow chart of the method of the present invention.
  • FIG. 2 is a schematic diagram of the first thread's read pointer accessing the second thread's write pointer in the method embodiment of the present invention;
  • FIG. 3 is a schematic diagram of the first thread's write pointer accessing the second thread's read pointer in the method embodiment of the present invention.
  • FIG. 1 is a block diagram of the method of the present invention, and the specific steps are as follows:
  • S1. Establish a memory pool; the memory pool is composed of a plurality of memory blocks; start a multithreaded application, first ensuring that the two threads occupying the memory read and write in the same direction;
  • a memory pool consisting of memory blocks 10, 11, 12, 13, 14, and 15 linked into a list is created; each memory block is 64 KB in size. Forming a memory pool as a linked list is well known, but it does not limit the invention.
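A linked list of fixed-size blocks of the kind just described might be set up as below; the struct and function names are assumptions, since the patent shows figures rather than code:

```c
#include <assert.h>
#include <stdlib.h>

#define BLOCK_SIZE (64 * 1024)       /* 64 KB per block, as in the embodiment */

typedef struct Block {
    struct Block *next;              /* singly linked list of blocks */
    unsigned char data[BLOCK_SIZE];
} Block;

typedef struct {
    Block *head;                     /* first block ("block 10" in the figures) */
    size_t nblocks;
} Pool;

/* Build a pool of n blocks chained in list order. */
static Pool pool_create(size_t n)
{
    Pool p = { NULL, 0 };
    Block **link = &p.head;
    for (size_t i = 0; i < n; i++) {
        Block *b = calloc(1, sizeof *b);
        if (!b)
            break;                   /* allocation failure: return what we have */
        *link = b;
        link = &b->next;
        p.nblocks++;
    }
    return p;
}

static void pool_destroy(Pool *p)
{
    for (Block *b = p->head; b; ) {
        Block *next = b->next;
        free(b);
        b = next;
    }
    p->head = NULL;
    p->nblocks = 0;
}
```

A pool matching the figures would be `pool_create(6)`, giving blocks 10 through 15 in list order.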
  • w represents the write pointer of the second thread
  • r represents the read pointer of the first thread.
  • the size of one data packet is the period of each write and each read. For example, if the second thread writes a packet of 10 KB, it writes no further data until the first thread has finished reading that 10 KB packet; 20 in the figure indicates the direction in which the w and r pointers move along the linked list.
  • unlike the prior art, which grants a thread access to the memory by adding a mutex, here, after the second thread has written a complete packet, the change of the write pointer is the last instruction at the assembly level.
  • at the start, the write pointer w coincides with the read pointer r.
  • after the second thread writes a packet into memory block 10, the space used in the block has in fact advanced to the position indicated by w', but the pointer w must still be moved to w' by one assembly-level instruction, that is, the pointer value w' is assigned to w. Because changing w to w' is a single assembly-level instruction, it is an atomic operation, and step S2 guarantees the write is not interrupted.
  • the first thread then starts reading, and its read pointer r moves from the start position to the w' position.
  • once the first thread has read one packet, that is, r has moved from the start position to w', the first thread stops reading; the second thread begins writing another packet, and after that write the write pointer is moved from w' to w" by one assembly instruction, that is, the value w" is assigned to the second thread's write pointer w.
  • the first thread then reads the second packet, and so on. Writes and reads here are packets of the same size.
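The rule above, that the pointer update is the last, atomic instruction of each write, maps naturally onto a release/acquire pair in C11 atomics. The sketch below is one possible reading of the scheme, using a flat buffer instead of the linked list of blocks and invented names throughout (no wrap-around handling):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define POOL_BYTES (64 * 1024)

static unsigned char pool_buf[POOL_BYTES];
static _Atomic size_t w;     /* write index, advanced only by the writer */
static size_t r;             /* read index, owned by the reader */

/* Writer: copy the packet bytes first, then publish the new write index.
   The release store is the last memory operation, so the reader can never
   observe w' before the packet data is in place. */
static void write_packet(const void *pkt, size_t len)
{
    size_t old = atomic_load_explicit(&w, memory_order_relaxed);
    memcpy(pool_buf + old, pkt, len);            /* 1. fill the packet bytes */
    atomic_store_explicit(&w, old + len,         /* 2. publish: last operation */
                          memory_order_release);
}

/* Reader: only bytes below the acquired write index may be read. */
static size_t read_packet(void *out, size_t len)
{
    size_t limit = atomic_load_explicit(&w, memory_order_acquire);
    if (r + len > limit)
        return 0;                                /* packet not yet published */
    memcpy(out, pool_buf + r, len);
    r += len;
    return len;
}
```

With exactly one writer and one reader, this ordering is what lets the mutex be dropped: the reader either sees the old index (and waits) or the new index plus the complete packet.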
  • in step S3 the first thread's write pointer accesses the second thread's read pointer, which is exactly the reverse of step S2: after the second thread completes a read, the first thread writes into the same shared memory. This process differs from step S2.
  • as soon as the second thread has read a data packet, the memory space the packet occupied is released.
  • in step S3, the read pointer moves from the start position w (r), where the first thread's w pointer and the second thread's r pointer coincide, to position r', meaning the second thread has read one packet. The change of r to r' is likewise one assembly instruction, so the read is not interrupted. Once a packet's space is freed, the first thread writes into the freed memory. Because the size of the data the first thread writes is unknown, it may be larger or smaller than the space the second thread freed; this differs from the write-then-read process of step S2, so an improvement is necessary.
  • if the block the first thread will write the data packet into and the block containing the second thread's read pointer are both memory block 10, the first thread first writes into the space already freed in block 10, then a new memory block 16 is requested and inserted between blocks 10 and 11, and the data that cannot fit in block 10 continues into the new block 16.
  • the second thread's read operation is unaffected: after reading the data of block 10 it still reads blocks 11, 12, 13, 14, and 15 in linked-list order. This ensures the first thread's write pointer is never in the same memory block as the second thread's read pointer.
  • alternatively, the design may require the first thread to start its write only after the second thread has read at least the first memory block; for example, only after the second thread has read the data of memory block 10 does the first thread begin writing into the space freed from block 10. This guarantees the first thread's write completes without error, without requesting a new memory block.
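The block-splicing fallback described above is an ordinary linked-list insertion; in this sketch the block ids mirror the figures (block 16 spliced between blocks 10 and 11), and everything else is an assumption:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Block {
    struct Block *next;
    int id;                          /* illustrative label (10, 11, 16, ...) */
} Block;

/* Splice a fresh block between b and b->next, as when block 16 is inserted
   between blocks 10 and 11 so the writer can continue without touching the
   blocks the reader has yet to visit. */
static Block *insert_after(Block *b, int id)
{
    Block *nb = malloc(sizeof *nb);
    nb->id = id;
    nb->next = b->next;              /* new block points at the old successor */
    b->next = nb;                    /* predecessor now points at the new block */
    return nb;
}
```

Because only the writer touches `b->next` for a block the reader has already passed, the reader's traversal order of the remaining blocks is unchanged.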
  • the present invention removes the mutex and saves additional system consumption.
  • what matters most in an embedded system is the improvement of resources and performance, which is also a key factor in judging a high-quality product. The two points above reflect the improvement in performance and the savings in CPU resources. Applying the method of the present invention to a mobile streaming media system will improve that system's performance.
  • a copy operation is, for the CPU, a loop of accesses to memory or cache; guaranteeing address-boundary alignment lets CPU data access realize its full potential and so improves block-copy performance. On a 32-bit CPU system (one with a 32-bit data bus),
  • one RAM access can fetch 32 bits (one word), and because a 32-bit access must be aligned on a 32-bit address boundary, aligned data requires no extra register operations to compensate for its position. This can further increase the speed of the copy operation.
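One way to read the alignment remark is a copy routine that uses 32-bit word accesses only when source, destination, and length are all 4-byte aligned, falling back to a plain byte copy otherwise; this is a sketch, not the patent's implementation:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Word-at-a-time copy: one aligned 32-bit access per word when everything
   is 4-byte aligned, otherwise an ordinary byte copy. */
static void copy32(void *dst, const void *src, size_t n)
{
    if ((((uintptr_t)dst | (uintptr_t)src | n) % 4) == 0) {
        uint32_t *d = dst;
        const uint32_t *s = src;
        for (size_t i = 0; i < n / 4; i++)
            d[i] = s[i];             /* aligned: full-width bus transfers */
    } else {
        memcpy(dst, src, n);         /* unaligned: fall back to byte copy */
    }
}
```

Sizing the pool's packets and blocks to multiples of the word size keeps copies on the fast path.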

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Storage Device Security (AREA)
  • Debugging And Monitoring (AREA)

Description

Title of Invention: An Efficient Memory Pool Access Method (一种高效的内存池访问方法) Technical Field
[1] The present invention relates to the field of data processing, and in particular to an efficient memory pool access method.
Background Art
[2] For an application, a process is like a big container. Once the application is run, it is as if the application has been loaded into the container, and other things can be put into the container as well (such as the variable data the application needs while running, the DLL files it references, and so on). When the application is run a second time, the contents of the container are not discarded; the system finds a new process container to hold the second instance.
[3] A process consists of three parts: the process control block, the program segment, and the data segment. A process can contain several threads. Threads let an application do several things at once (for example, one thread writes a file to disk while another receives the user's keystrokes and reacts promptly, without the two interfering). After a program is run, the first thing the system does is create a default thread for the program's process; the program can then add or remove threads as needed. A process is a program that can execute concurrently. As the execution of a program over a data set, it is an independent unit of resource allocation and scheduling in the system; it is also called an activity, a path, or a task, and it has two properties: activity and concurrency. A process can be in one of three states, running, blocked, or ready, and transitions between them under certain conditions: ready to running, running to blocked, blocked to ready.
[4] A process is the basis of the operating system's structure. It is a program in execution; an instance of a program running on the computer; an entity that can be assigned to and executed by a processor; an active unit described by a single sequential thread of execution, a current state, and an associated set of system resources.
[5] A process is a running instance of an application, one dynamic execution of that application. Put simply, it is an executing program the operating system is currently running. The programs currently executing in the system include the programs the system needs to manage the computer and carry out its various operations; the additional programs the user opens and runs; and, of course, rogue programs that run automatically without the user's knowledge (these may well be viruses).
[6] A process is one execution of a program on a computer. When you run a program, you start a process. Clearly, a program is dead (static) while a process is alive (dynamic). Processes can be divided into system processes and user processes. Processes that carry out the various functions of the operating system are system processes; they are the operating system itself in its running state. User processes are all the processes you start. A process is the unit in which the operating system allocates resources.
[7] A thread is an entity within a process and the basic unit independently scheduled and dispatched by the system. A thread owns no system resources of its own, only the few resources indispensable to its execution, but it shares with the other threads of the same process all the resources the process owns. One thread can create and destroy another, and multiple threads within one process can execute concurrently. Because threads constrain one another, a thread's execution appears intermittent. Threads also have three basic states: ready, blocked, and running.
[8] A thread is a single sequential flow of control within a program. Running multiple threads in one program to do different jobs at the same time is called multithreading.
[9] Threads differ from processes in that a child process and its parent have different code and data spaces, whereas multiple threads share the same data space, each with its own execution stack and program counter as its execution context. Multithreading mainly saves and better exploits CPU time; whether it pays off depends on the situation. Running threads requires the computer's memory resources and CPU.
[10] A characteristic of current embedded operating systems is that the kernel supports multithread preemption. When two threads access the same memory pool, the usual approach is to add a mutex to prevent data-access errors. If the two threads access the pool very frequently, the overhead the mutual exclusion imposes on the system is very high; testing shows a fivefold loss of performance. After a period of study, the mutex can now be removed, saving that extra system overhead. This will improve the performance of a mobile streaming media system.
[11] In addition, memory copies are usually both frequent and large. If the source and destination memory blocks are chosen carelessly, copy performance drops greatly.
[12] In view of this, an improved method is needed to overcome the defects of the prior art.
Summary of the Invention
[13] The present invention provides an efficient memory pool access method; the specific steps of the method are as follows:
[14] S1. Establish a memory pool composed of several memory blocks, and start a multithreaded application, first ensuring that the two threads occupying the memory read and write in the same direction;
[15] S2. When the first thread accesses the write pointer within the second thread's current block, it must be ensured that the change made to the write pointer in the current block is the last instruction at the assembly level; that is, after this instruction executes, none of the following instructions performs any further write related to the memory block. In other words, the writing thread must finish writing the data before the reading thread is allowed to read it;
[16] S3. When the first thread accesses the second thread's read pointer, the second thread must ensure that its change to the read pointer in the current block is the last instruction at the assembly level; that is, after this instruction executes, none of the following instructions performs any further read related to the memory block.
[17] The beneficial effect of the present invention is that, as testing shows, removing the mutex saves extra system overhead, which will improve the performance of a mobile streaming media system. What matters most in an embedded system is the improvement of resources and performance, which is also a key factor in judging a high-quality product; the two points above reflect the improvement in performance and the savings in CPU resources.
Brief Description of the Drawings
[18] Figure 1 is a schematic flow chart of the method of the present invention;
[19] Figure 2 is a schematic diagram of the first thread's read pointer accessing the second thread's write pointer in an embodiment of the method; [20] Figure 3 is a schematic diagram of the first thread's write pointer accessing the second thread's read pointer in an embodiment of the method.
Detailed Description
[21] The specific implementation of the present invention is described below with reference to the accompanying drawings.
[22] Figure 1 is a flow chart of the method of the present invention; the specific steps are as follows:
[23] S1: Establish a memory pool composed of several memory blocks, and start a multithreaded application, first ensuring that the two threads occupying the memory read and write in the same direction;
[24] S2: When the first thread's read pointer accesses the write pointer within the second thread's current block, it must be ensured that the change made to the write pointer in the current block is the last instruction at the assembly level; that is, after this instruction executes, none of the following instructions performs any further write related to the memory block. In other words, the writing thread must finish writing the data before the reading thread reads it;
[25] S3: When the first thread accesses the second thread's read pointer, the second thread must ensure that its change to the read pointer in the current block is the last instruction at the assembly level; that is, after this instruction executes, none of the following instructions performs any further read related to the memory block.
[26] Referring to Figure 2, step S2, in which the first thread's read pointer accesses the second thread's write pointer, that is, the process in which the first thread reads from the same shared memory after the second thread has completed a write, is described in detail. First, a memory pool composed of memory blocks 10, 11, 12, 13, 14, and 15 linked into a list is created, each block 64 KB in size. Composing a memory pool as a linked list is well known and does not limit the invention. In the figure, w denotes the second thread's write pointer and r denotes the first thread's read pointer. Here the second thread writes data first and the first thread then reads it; each write by the second thread and each read by the first thread covers one data packet per cycle. For example, the second thread first writes a packet of 10 KB; it then writes no further data, waiting until the first thread has finished reading that 10 KB packet before writing again. In the figure, 20 indicates the direction in which the w and r pointers move along the list. Unlike the prior art, which grants a thread access to the memory by adding a mutex, in this method, after the second thread has written a complete packet, the change of the write pointer is the last instruction at the assembly level. For example, at the start the write pointer w coincides with the read pointer r. After the second thread writes a packet into memory block 10, the space used in the block has in fact advanced to the position indicated by w', but the pointer w must still be moved to w' by a single assembly-level instruction, that is, the pointer value w' is assigned to w. Because changing w to w' is one assembly-level instruction, it is an atomic operation, so step S2 guarantees the write is not interrupted. The first thread then starts reading, and its read pointer r moves from the start position to w'. Likewise, once the first thread has read one packet, that is, r has moved from the start position to w', the first thread stops reading and the second thread begins writing another packet; after that write, the write pointer is moved from w' to w" by one assembly instruction, that is, the value w" is assigned to the second thread's write pointer w. The first thread then reads the second packet, and so on. Here writes and reads are packets of the same size.
[27] Referring to Figure 3, step S3, in which the first thread's write pointer accesses the second thread's read pointer, is described in detail. This process is exactly the reverse of step S2: after the second thread completes a read, the first thread writes into the same shared memory. Unlike step S2, as soon as the second thread has read a packet it frees the memory the packet occupied. For example, in Figure 3 the second thread's read pointer moves from the start position w (r), where the first thread's w pointer and the second thread's r pointer coincide, to position r', meaning the second thread has read one packet. The change of r to r' is likewise one assembly instruction, so the read is not interrupted. Once a packet's space is freed, the first thread writes into the freed memory. Because the size of the data the first thread writes is unknown, it may be larger or smaller than the space the second thread freed; this differs from the write-then-read process of step S2, so an improvement is necessary. If the data the first thread wants to write is larger than the freed space, the write cannot complete. The scheme can therefore be designed so that, when writing a packet, the first thread first checks whether the block it is about to write into is the block containing the second thread's read pointer. As shown in Figure 3, if the block the first thread will write into and the block containing the second thread's read pointer are both memory block 10, the first thread first writes into the space already freed in block 10, then requests a new memory block 16 and inserts it between blocks 10 and 11; data that does not fit in block 10 continues into the new block 16. The second thread's read is unaffected: after reading the data of block 10 it still reads blocks 11, 12, 13, 14, and 15 in linked-list order. This ensures the first thread's write pointer is never in the same memory block as the second thread's read pointer.
[28] Alternatively, the design may require the first thread to start its write only after the second thread has read at least the first memory block; for example, only after the second thread has read the data of memory block 10 does the first thread begin writing into the space freed from block 10. This guarantees the first thread's write completes without error, without requesting a new memory block.
[29] The present invention removes the mutex and saves the extra system overhead. What matters most in an embedded system is the improvement of resources and performance, which is also a key factor in judging a high-quality product; the two points above reflect the improvement in performance and the savings in CPU resources. Applying the method of the present invention to a mobile streaming media system will improve that system's performance.
[30] For the CPU, a copy operation is a loop of accesses to memory or cache. Guaranteeing address-boundary alignment lets CPU data access realize its full potential and thus improves block-copy performance. On a 32-bit CPU system (one with a 32-bit data bus), one RAM access can fetch 32 bits (one word); and because a 32-bit access must be aligned on a 32-bit address boundary if the accessed data is not to require extra register operations to compensate for its position, alignment further increases the speed of the copy operation.
[31] The above are merely preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims

权利要求书 (Claims)
1. An efficient memory pool access method, characterized in that the specific steps of the method are as follows:
S1: Establish a memory pool composed of several memory blocks, and start a multithreaded application, first ensuring that the two threads occupying the memory read and write in the same direction;
S2: When the first thread accesses the write pointer within the second thread's current block, it must be ensured that the change made to the write pointer in the current block is the last instruction at the assembly level; that is, after this instruction executes, none of the following instructions performs any further write related to the memory block. In other words, the writing thread must finish writing the data before the reading thread is allowed to read it;
S3: When the first thread accesses the second thread's read pointer, the second thread must ensure that its change to the read pointer in the current block is the last instruction at the assembly level; that is, after this instruction executes, none of the following instructions performs any further read related to the memory block.
2. The efficient memory pool access method of claim 1, characterized in that ensuring the two threads read and write in the same direction can be achieved by establishing a circular linked list.
3. The efficient memory pool access method of claim 1, characterized in that in steps S2 and S3 the last instruction at the assembly level is an atomic operation, which guarantees that the write in step S2 cannot be interrupted and that the read in step S3 cannot be interrupted.
4. The efficient memory pool access method of claim 2, characterized in that step S3 ensures that the first thread's write pointer is not in the memory block containing the second thread's read pointer.
5. The efficient memory pool access method of claim 2, characterized in that in step S3, only after the first thread has read at least the first memory block does the second thread start its write operation.
6. The efficient memory pool access method of claim 5, characterized in that in step S3, if it is determined that the first thread's write pointer will write into the memory block containing the second thread's read pointer, a new free memory block is requested and inserted between the memory block containing the first thread's write pointer and the next adjacent memory block.
7. The efficient memory pool access method of claim 1, characterized in that each memory block is 64 KB in size.
PCT/CN2009/076336 2009-08-18 2009-12-31 An efficient memory pool access method WO2011020281A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN 200910109444 CN101630276B (zh) 2009-08-18 2009-08-18 An efficient memory access method
CN200910109444.8 2009-08-18

Publications (1)

Publication Number Publication Date
WO2011020281A1 (zh) 2011-02-24

Family

ID=41575394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/076336 WO2011020281A1 (zh) 2009-08-18 2009-12-31 An efficient memory pool access method

Country Status (2)

Country Link
CN (1) CN101630276B (zh)
WO (1) WO2011020281A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833440B (zh) * 2010-04-30 2013-01-02 西安交通大学 Compiler-supported method and apparatus for synchronized execution of speculative multithreaded memory data
CN102129396B (zh) * 2011-03-04 2013-07-10 中国科学院软件研究所 A real-time, fast inter-thread data exchange method
CN102723086B (zh) * 2011-05-05 2017-04-12 新奥特(北京)视频技术有限公司 A method for intelligently updating and playing graphic-text animation
CN102821302B (zh) * 2012-07-25 2016-03-09 苏州科达科技股份有限公司 Video download method, gateway device, and network video surveillance system
CN110764710B (zh) * 2016-01-30 2023-08-11 北京忆恒创源科技股份有限公司 Low-latency, high-IOPS data access method and storage system
CN108228235B (zh) * 2016-12-21 2020-11-13 龙芯中科技术有限公司 Data operation processing method and apparatus based on the MIPS architecture
CN108345811B (zh) * 2017-01-23 2021-07-23 杭州爱钥医疗健康科技有限公司 Radio-frequency interference suppression method and apparatus
CN107329807B (zh) * 2017-06-29 2020-06-30 北京京东尚科信息技术有限公司 Data delay processing method and apparatus, and computer-readable storage medium
CN109032798B (zh) * 2018-07-25 2022-03-18 天津凯发电气股份有限公司 A shared-memory lock control method for a power quality management system
CN110515868A (zh) * 2019-08-09 2019-11-29 苏州浪潮智能科技有限公司 Method and apparatus for displaying an image
CN110673952B (zh) * 2019-09-04 2023-01-10 苏州浪潮智能科技有限公司 A data processing method and apparatus for high-concurrency read applications
CN111381887B (zh) * 2020-03-18 2023-05-09 深圳中微电科技有限公司 Method, apparatus, and processor for image motion compensation in an MVP processor

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001061471A2 (en) * 2000-02-16 2001-08-23 Sun Microsystems, Inc. An implementation for nonblocking memory allocation
CN101317155A (zh) * 2005-12-27 2008-12-03 英特尔公司 Data structure and management techniques for local user-level thread data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100487658C (zh) * 2006-07-20 2009-05-13 中兴通讯股份有限公司 A method for detecting out-of-bounds memory access
CN101290590B (zh) * 2008-06-03 2012-01-11 北京中星微电子有限公司 Method and unit for task switching in an embedded operating system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001061471A2 (en) * 2000-02-16 2001-08-23 Sun Microsystems, Inc. An implementation for nonblocking memory allocation
CN101317155A (zh) * 2005-12-27 2008-12-03 英特尔公司 Data structure and management techniques for local user-level thread data

Also Published As

Publication number Publication date
CN101630276A (zh) 2010-01-20
CN101630276B (zh) 2012-12-19

Similar Documents

Publication Publication Date Title
WO2011020281A1 (zh) An efficient memory pool access method
US8424015B2 (en) Transactional memory preemption mechanism
US9626187B2 (en) Transactional memory system supporting unbroken suspended execution
JP7087029B2 (ja) 中央処理装置(cpu)と補助プロセッサとの間の改善した関数コールバック機構
US8689215B2 (en) Structured exception handling for application-managed thread units
US7590774B2 (en) Method and system for efficient context swapping
TW409227B (en) Method and apparatus for selecting thread switch events in a multithreaded processor
US8881153B2 (en) Speculative thread execution with hardware transactional memory
US8079035B2 (en) Data structure and management techniques for local user-level thread data
US8516483B2 (en) Transparent support for operating system services for a sequestered sequencer
CN108139946B (zh) 用于在冲突存在时进行有效任务调度的方法
US20140115594A1 (en) Mechanism to schedule threads on os-sequestered sequencers without operating system intervention
TWI231914B (en) Context pipelines
US8490181B2 (en) Deterministic serialization of access to shared resource in a multi-processor system for code instructions accessing resources in a non-deterministic order
EP2764433A1 (en) Maintaining operand liveness information in a computer system
JP2009537053A (ja) 仮想化されたトランザクショナルメモリのグローバルオーバーフロー方法
JP2012531680A (ja) システム管理モードのためのプロセッサにおける状態記憶の提供
US20130152100A1 (en) Method to guarantee real time processing of soft real-time operating system
JP2014085839A (ja) 並列実行機構及びその動作方法
WO2005048009A2 (en) Method and system for multithreaded processing using errands
JP4130465B2 (ja) メモリ転送処理サイズが異なるプロセッサに関してアトミックな処理を実行するための技術
JP2006092042A (ja) 情報処理装置及びコンテキスト切り替え方法
US20140223447A1 (en) Method and System For Exception-Less System Calls In An Operating System
Kim et al. Non-preemptive demand paging technique for NAND flash-based real-time embedded systems
US9141310B1 (en) Methods and apparatuses for shared state information among concurrently running processes or threads

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09848422

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09848422

Country of ref document: EP

Kind code of ref document: A1