WO2019205448A1 - Information management method for improving multi-core processor - Google Patents

Information management method for improving multi-core processor

Info

Publication number
WO2019205448A1
Authority
WO
WIPO (PCT)
Prior art keywords
shared memory
processing unit
central processing
time delay
low
Prior art date
Application number
PCT/CN2018/105864
Other languages
French (fr)
Chinese (zh)
Inventor
谢享奇
李庭育
魏智汎
洪振洲
Original Assignee
江苏华存电子科技有限公司
Priority date
Filing date
Publication date
Application filed by 江苏华存电子科技有限公司
Publication of WO2019205448A1 publication Critical patent/WO2019205448A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/167Interprocessor communication using a common memory, e.g. mailbox

Definitions

  • the invention relates to the technical field of multi-core processors, in particular to an information management method for improving multi-core processors.
  • the existing multi-core architecture synchronizes the L1 data cache contents of each processor through a Snoop Control Unit (SCU) interface and uses a distributed interrupt controller to support the existing interrupt controllers.
  • each processor's timer and watchdog (WatchDog) are independent of the others.
  • Each processor maintains data cache consistency between processors through the snoop control unit.
  • in the traditional control method, one main central processing unit (CPU) is responsible for the main control tasks, and the remaining central processing units are called secondary central processing units.
  • under this architecture the firmware generally leaves control and notification to the main central processing unit; although this is relatively simple and easy to use in terms of control, all information in the communication flow must be passed and controlled through the snoop control unit, so the firmware of the individual CPUs cannot communicate with each other directly and unnecessary communication delays arise during transfers.
  • the ability of the secondary CPUs to exchange information with each other is also limited, and they have no efficient way to dynamically request control in the reverse direction.
  • in the control of non-volatile flash memory devices, the main central processing unit is assigned to handle front-end communication with the host, and the secondary central processing unit handles the back-end data access control related to the flash memory.
  • the main central processing unit receives the host's commands through the protocol layer of the relevant specification and notifies, through the snoop control unit, the information that the secondary central processing unit must process, while the secondary central processing unit likewise passively waits for task allocation through the snoop control unit.
  • information passes back and forth between them very frequently, and read/write efficiency is a critical requirement for a non-volatile flash memory device; because the physical characteristics of flash memory do not allow data to be rewritten once written, additional flash blocks must be used to consolidate the data already written, so part of the time delay in flash reads and writes cannot be avoided. In the high-speed flash memory device controller chip, it therefore becomes very important how, under a multi-core architecture, the central processing units can communicate more effectively with fewer unnecessary time delays, and how the multiple CPU cores can achieve more flexible multi-directional active control and timely message feedback with one another.
  • an information management method for improving a multi-core processor includes a configuration of the shared memory of a main central processing unit and a configuration of the shared memory of a secondary central processing unit; the main central processing unit has write access to a first low-time-delay shared memory and read access to a second low-time-delay shared memory, while the secondary central processing unit has write access to the second low-time-delay shared memory and read access to the first low-time-delay shared memory.
  • the shared memory of the main central processing unit is configured as follows:
  • two low-time-delay shared memories are configured between the main central processing unit and the secondary central processing unit; the main central processing unit is given write access to the first low-time-delay shared memory through a dedicated write indicator and exclusive read access to the second low-time-delay shared memory through a dedicated read indicator, while the secondary central processing unit is given write access to the second and exclusive read access to the first in the same way;
  • when information arrives at the processor front end, the main central processing unit promptly puts the information and a tag code into the first low-time-delay shared memory and increments its write indicator by one;
  • when notified that the front-end host system has entered idle mode or power-saving mode, the main central processing unit starts the hardware system timer so that the secondary central processing unit obtains the start time synchronously;
  • the secondary central processing unit reads the dedicated indicator value at any time to actively determine whether the main central processing unit has sent data;
  • the secondary central processing unit reads the tag code and the low-time-delay shared memory content, promptly performs the work requiring collaborative processing, and increments its dedicated read indicator;
  • after processing, the secondary central processing unit fills the corresponding tag code and result message into the second low-time-delay shared memory and increments its dedicated write indicator by one;
  • the main central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the second low-time-delay shared memory;
  • the main central processing unit learns from the corresponding tag code which item the secondary central processing unit has completed.
  • the shared memory of the secondary central processing unit is configured as follows:
  • when information is reported at the processor back end, the secondary central processing unit promptly puts the information and a tag code into the second low-time-delay shared memory and increments its write indicator by one;
  • when the back-end system has information that must be output through the front-end universal asynchronous receiver/transmitter, the secondary central processing unit fills in the message and sets the tag code;
  • the main central processing unit reads the dedicated indicator value at any time to actively determine whether the secondary central processing unit has sent data;
  • the main central processing unit reads the tag code and the low-time-delay shared memory content, promptly performs the work requiring collaborative processing, and increments its dedicated read indicator; it then fills the corresponding tag code and result message into the first low-time-delay shared memory and increments its dedicated write indicator by one;
  • the secondary central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the first low-time-delay shared memory;
  • the secondary central processing unit learns from the corresponding tag code which item the main central processing unit has completed.
  • the beneficial effect of the present invention is that it provides one or more shared memories that both the main central processing unit and the secondary central processing unit can control or read directly, allowing the multi-core processor to send and read information concurrently with minimal time delay, and allowing the secondary central processing unit to take the lead in information processing in the reverse direction and ask the main central processing unit to assist.
  • FIG. 1 is a flow chart showing the configuration of a shared memory of a main central processing unit of the present invention
  • FIG. 2 is a flow chart showing the configuration of the shared memory of the sub-CPU of the present invention.
  • the present invention provides a technical solution: an information management method for improving a multi-core processor, including the configuration of the shared memory of a main central processing unit and the configuration of the shared memory of a secondary central processing unit; the main central processing unit has write access to a first low-time-delay shared memory and read access to a second low-time-delay shared memory.
  • conversely, the secondary central processing unit has write access to the second low-time-delay shared memory and read access to the first low-time-delay shared memory.
  • the low-time-delay shared memory contains a tag code, and the message types it carries include write information, feedback information, message length, command code, system timer, and universal asynchronous receiver/transmitter data.
  • in summary, the present invention provides one or more shared memories that both the main central processing unit and the secondary central processing unit can control or read directly, allowing the multi-core processor to send and read information concurrently with minimal time delay, and also allowing the secondary central processing unit to take the lead in information processing in the reverse direction and request assistance from the main central processing unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

An information management method for improving a multi-core processor, comprising the configuration of a shared memory of a main central processing unit and the configuration of a shared memory of a secondary central processing unit. The main central processing unit has write access to a first low-time-delay shared memory and read access to a second low-time-delay shared memory; conversely, the secondary central processing unit has write access to the second low-time-delay shared memory and read access to the first low-time-delay shared memory. The low-time-delay shared memory contains a tag code and carries various message types, such as write information, feedback information, message length, a command code, a system timer, and universal asynchronous receiver/transmitter data. Because the method provides one or more shared memories that both the main central processing unit and the secondary central processing unit can control or read directly, the multi-core processor can send and read information concurrently with minimal time delay, and the secondary central processing unit can also take the lead in information processing in the reverse direction and request assistance from the main central processing unit.

Description

Information management method for improving a multi-core processor

Technical Field
The invention relates to the technical field of multi-core processors, and in particular to an information management method for improving a multi-core processor.
Background Art
The existing multi-core architecture synchronizes the L1 data cache contents of each processor through a Snoop Control Unit (SCU) interface and uses a distributed interrupt controller to support the existing interrupt controllers; each processor's timer and watchdog (WatchDog) are independent, and each processor maintains data cache coherency with the others through the snoop control unit. In the traditional control method, one main central processing unit (CPU) is responsible for the main control tasks, and the remaining central processing units are called secondary central processing units. Under this architecture the firmware generally leaves control and notification to the main central processing unit. Although this is relatively simple and easy to use in terms of control, all information in the communication flow must be passed and controlled through the snoop control unit, and the firmware of the individual CPUs cannot communicate with each other directly, so unnecessary communication delays arise during transfers. The ability of the secondary CPUs to exchange information with each other is also limited, and they have no efficient way to dynamically request control in the reverse direction.
In the control of non-volatile flash memory devices, the main central processing unit is assigned to handle front-end communication with the host, while the secondary central processing unit handles the back-end data access control related to the flash memory. The main central processing unit receives the host's commands through the protocol layer of the relevant specification and notifies, through the snoop control unit, the information that the secondary central processing unit must process; the secondary central processing unit likewise passively waits for task allocation through the snoop control unit. Information passes back and forth between them very frequently, and read/write efficiency is a critical requirement for non-volatile flash memory devices. Because the physical characteristics of flash memory do not allow data to be rewritten once written, additional flash blocks must be used to consolidate the data already written, so part of the time delay in flash reads and writes is unavoidable. Within the whole high-speed flash memory device controller chip, it therefore becomes a very important issue how, under a multi-core architecture, the central processing units can communicate more effectively with fewer unnecessary time delays, and how the multiple CPU cores can achieve more flexible multi-directional active control and timely message feedback with one another.
Summary of the Invention
It is an object of the present invention to provide an information management method for improving a multi-core processor, so as to solve the problems raised in the background art above.
To achieve the above object, the present invention provides the following technical solution: an information management method for improving a multi-core processor, including the configuration of the shared memory of a main central processing unit and the configuration of the shared memory of a secondary central processing unit. The main central processing unit has write access to a first low-time-delay shared memory and read access to a second low-time-delay shared memory; the secondary central processing unit has write access to the second low-time-delay shared memory and read access to the first low-time-delay shared memory.
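As a concrete illustration of this arrangement, a minimal C sketch of the two low-time-delay shared memory blocks and the ownership of their indicators is shown below; the structure names, the fixed-size ring of message slots, and the field widths are illustrative assumptions rather than details taken from the patent.

```c
#include <stdint.h>

#define SLOT_COUNT   16   /* assumed ring depth                        */
#define PAYLOAD_SIZE 32   /* assumed maximum message length in bytes   */

/* One message slot: a tag code plus its payload, as described above. */
typedef struct {
    uint32_t tag;                   /* tag code identifying the work item  */
    uint32_t length;                /* message length                      */
    uint8_t  payload[PAYLOAD_SIZE]; /* write info, feedback, command, ...  */
} msg_slot_t;

/* One low-time-delay shared memory block used as a one-way mailbox.
 * The writing core owns write_idx (its dedicated write indicator);
 * the reading core owns read_idx (its dedicated read indicator).     */
typedef struct {
    volatile uint32_t write_idx;
    volatile uint32_t read_idx;
    msg_slot_t        slot[SLOT_COUNT];
} mailbox_t;

/* Block 1: primary CPU writes, secondary CPU reads.
 * Block 2: secondary CPU writes, primary CPU reads.                  */
typedef struct {
    mailbox_t to_secondary;  /* first low-time-delay shared memory   */
    mailbox_t to_primary;    /* second low-time-delay shared memory  */
} shared_region_t;
```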
Preferably, the shared memory of the main central processing unit is configured as follows:
A. Two low-time-delay shared memories are configured between the main central processing unit and the secondary central processing unit, and the main central processing unit is set to have write access to the first low-time-delay shared memory through a dedicated write indicator and exclusive read access to the second low-time-delay shared memory through a dedicated read indicator;
B. The secondary central processing unit is set to have write access to the second low-time-delay shared memory through a dedicated write indicator and exclusive read access to the first low-time-delay shared memory through a dedicated read indicator;
C. When information arrives at the processor front end, the main central processing unit promptly puts the information and a tag code into the first low-time-delay shared memory and increments its write indicator by one;
D. When notified that the front-end host system has entered idle mode or power-saving mode, the main central processing unit starts the hardware system timer so that the secondary central processing unit obtains the start time synchronously;
E. The secondary central processing unit reads the dedicated indicator value at any time to actively determine whether the main central processing unit has sent data;
F. The secondary central processing unit reads the tag code and the low-time-delay shared memory content, promptly performs the work requiring collaborative processing, and increments its dedicated read indicator by one;
G. After processing, the secondary central processing unit fills the corresponding tag code and result message into the second low-time-delay shared memory and increments its dedicated write indicator by one;
H. The main central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the second low-time-delay shared memory;
I. The main central processing unit learns from the corresponding tag code which item the secondary central processing unit has completed.
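Seen from the main central processing unit, steps C, D, H, and I above might be sketched in C as follows; the mailbox layout repeats the assumption made earlier, the static variables stand in for the real shared memory addresses, and hw_system_timer_start() is a placeholder for the actual hardware timer call.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SLOT_COUNT   16
#define PAYLOAD_SIZE 32

typedef struct { uint32_t tag, length; uint8_t payload[PAYLOAD_SIZE]; } msg_slot_t;
typedef struct { volatile uint32_t write_idx, read_idx; msg_slot_t slot[SLOT_COUNT]; } mailbox_t;

/* In a real system these would live at the shared memory addresses. */
static mailbox_t to_secondary;  /* first shared memory: primary writes  */
static mailbox_t to_primary;    /* second shared memory: primary reads  */

/* Step C: front-end information arrives; put the message and tag code into
 * the first shared memory and advance the dedicated write indicator by one. */
void primary_post(uint32_t tag, const void *msg, uint32_t len)
{
    msg_slot_t *s = &to_secondary.slot[to_secondary.write_idx % SLOT_COUNT];
    if (len > PAYLOAD_SIZE)
        len = PAYLOAD_SIZE;          /* keep the sketch within the slot size */
    s->tag = tag;
    s->length = len;
    memcpy(s->payload, msg, len);
    to_secondary.write_idx++;        /* secondary CPU sees the new message   */
}

/* Step D: on an idle / power-saving notice from the host, start the hardware
 * system timer so the secondary CPU obtains the same start time.            */
extern void hw_system_timer_start(void);   /* placeholder for hardware call  */
void primary_enter_idle(void) { hw_system_timer_start(); }

/* Steps H-I: read the second shared memory's indices at any time and use the
 * tag code to learn which item the secondary CPU has completed.             */
bool primary_poll_completion(uint32_t *done_tag)
{
    if (to_primary.read_idx == to_primary.write_idx)
        return false;                                  /* nothing new yet    */
    *done_tag = to_primary.slot[to_primary.read_idx % SLOT_COUNT].tag;
    to_primary.read_idx++;            /* dedicated read indicator plus one   */
    return true;
}
```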
Preferably, the shared memory of the secondary central processing unit is configured as follows:
A. When information is reported at the processor back end, the secondary central processing unit promptly puts the information and a tag code into the second low-time-delay shared memory and increments its write indicator by one;
B. When the back-end system has information that must be output through the front-end universal asynchronous receiver/transmitter, the secondary central processing unit fills in the message and sets the tag code;
C. The main central processing unit reads the dedicated indicator value at any time to actively determine whether the secondary central processing unit has sent data;
D. The main central processing unit reads the tag code and the low-time-delay shared memory content;
E. It promptly performs the work requiring collaborative processing and increments its dedicated read indicator by one;
F. After processing, the main central processing unit fills the corresponding tag code and result message into the first low-time-delay shared memory and increments its dedicated write indicator by one;
G. The secondary central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the first low-time-delay shared memory;
H. The secondary central processing unit learns from the corresponding tag code which item the main central processing unit has completed.
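On the secondary central processing unit's side, posting a reverse-initiated request such as step B (asking for output through the front-end universal asynchronous receiver/transmitter) could look like the sketch below; the tag value TAG_UART_OUTPUT and the mailbox instance are assumptions for illustration only.

```c
#include <stdint.h>
#include <string.h>

#define SLOT_COUNT      16
#define PAYLOAD_SIZE    32
#define TAG_UART_OUTPUT 0x0005u  /* assumed tag code for "output via UART" */

typedef struct { uint32_t tag, length; uint8_t payload[PAYLOAD_SIZE]; } msg_slot_t;
typedef struct { volatile uint32_t write_idx, read_idx; msg_slot_t slot[SLOT_COUNT]; } mailbox_t;

/* Second shared memory: the secondary CPU holds the write indicator. */
static mailbox_t to_primary;

/* Steps A-B: when the back end has information to report, fill in the
 * message, set the tag code, and advance the dedicated write indicator. */
void secondary_request_uart_output(const char *text, uint32_t len)
{
    msg_slot_t *s = &to_primary.slot[to_primary.write_idx % SLOT_COUNT];
    if (len > PAYLOAD_SIZE)
        len = PAYLOAD_SIZE;
    s->tag = TAG_UART_OUTPUT;
    s->length = len;
    memcpy(s->payload, text, len);
    to_primary.write_idx++;   /* primary CPU will notice the new index value */
}
```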
Compared with the prior art, the present invention has the beneficial effect that it provides one or more shared memories that both the main central processing unit and the secondary central processing unit can control or read directly, allowing the multi-core processor to send and read information concurrently with minimal time delay, and also allowing the secondary central processing unit to take the lead in information processing in the reverse direction and request assistance from the main central processing unit.
Brief Description of the Drawings
FIG. 1 is a flow chart of the configuration of the shared memory of the main central processing unit of the present invention;
FIG. 2 is a flow chart of the configuration of the shared memory of the secondary central processing unit of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIGS. 1-2, the present invention provides a technical solution: an information management method for improving a multi-core processor, including the configuration of the shared memory of a main central processing unit and the configuration of the shared memory of a secondary central processing unit. The main central processing unit has write access to a first low-time-delay shared memory and read access to a second low-time-delay shared memory; conversely, the secondary central processing unit has write access to the second low-time-delay shared memory and read access to the first low-time-delay shared memory. The low-time-delay shared memory contains a tag code, and the message types it carries include write information, feedback information, message length, command code, system timer, and universal asynchronous receiver/transmitter data.
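One plausible way to encode the message categories named here as tag codes carried in each shared memory slot is sketched below; the specific numbering and field sizes are assumptions, not values specified by the patent.

```c
#include <stdint.h>

/* Assumed tag codes for the message categories listed in the description. */
typedef enum {
    TAG_WRITE_INFO   = 1,  /* write information                           */
    TAG_FEEDBACK     = 2,  /* feedback information                        */
    TAG_MSG_LENGTH   = 3,  /* message length                              */
    TAG_COMMAND_CODE = 4,  /* command code                                */
    TAG_SYSTEM_TIMER = 5,  /* system timer value                          */
    TAG_UART_DATA    = 6   /* universal asynchronous transceiver data     */
} tag_code_t;

/* A slot in the low-time-delay shared memory carries the tag code plus
 * the payload that the tag describes.                                    */
typedef struct {
    uint32_t tag;        /* one of tag_code_t        */
    uint32_t length;     /* payload length in bytes  */
    uint8_t  payload[32];
} msg_slot_t;
```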
In the present invention, the shared memory of the main central processing unit is configured as follows:
A. Two low-time-delay shared memories are configured between the main central processing unit and the secondary central processing unit, and the main central processing unit is set to have write access to the first low-time-delay shared memory through a dedicated write indicator and exclusive read access to the second low-time-delay shared memory through a dedicated read indicator;
B. The secondary central processing unit is set to have write access to the second low-time-delay shared memory through a dedicated write indicator and exclusive read access to the first low-time-delay shared memory through a dedicated read indicator;
C. When information arrives at the processor front end, the main central processing unit promptly puts the information and a tag code into the first low-time-delay shared memory and increments its write indicator by one;
D. When notified that the front-end host system has entered idle mode or power-saving mode, the main central processing unit starts the hardware system timer so that the secondary central processing unit obtains the start time synchronously;
E. The secondary central processing unit reads the dedicated indicator value at any time to actively determine whether the main central processing unit has sent data;
F. The secondary central processing unit reads the tag code and the low-time-delay shared memory content, promptly performs the work requiring collaborative processing, and increments its dedicated read indicator by one;
G. After processing, the secondary central processing unit fills the corresponding tag code and result message into the second low-time-delay shared memory and increments its dedicated write indicator by one;
H. The main central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the second low-time-delay shared memory;
I. The main central processing unit learns from the corresponding tag code which item the secondary central processing unit has completed.
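The secondary central processing unit's side of steps E through G could be sketched as the polling routine below; handle_task() stands in for whatever collaborative work the tag code selects, and all names and layouts are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define SLOT_COUNT   16
#define PAYLOAD_SIZE 32

typedef struct { uint32_t tag, length; uint8_t payload[PAYLOAD_SIZE]; } msg_slot_t;
typedef struct { volatile uint32_t write_idx, read_idx; msg_slot_t slot[SLOT_COUNT]; } mailbox_t;

static mailbox_t to_secondary;  /* first shared memory: secondary reads   */
static mailbox_t to_primary;    /* second shared memory: secondary writes */

/* Placeholder for the collaborative work selected by the tag code. */
extern void handle_task(uint32_t tag, const uint8_t *msg, uint32_t len);

/* Steps E-G: read the index value at any time to see whether the primary
 * CPU has sent data, process it, advance the dedicated read indicator,
 * then fill the reply tag code into the second shared memory and advance
 * the dedicated write indicator.                                          */
bool secondary_service_once(void)
{
    if (to_secondary.read_idx == to_secondary.write_idx)
        return false;                                   /* nothing pending */

    msg_slot_t *in = &to_secondary.slot[to_secondary.read_idx % SLOT_COUNT];
    uint32_t tag = in->tag;                 /* keep the tag for the reply   */
    handle_task(tag, in->payload, in->length);
    to_secondary.read_idx++;                /* dedicated read indicator + 1 */

    msg_slot_t *out = &to_primary.slot[to_primary.write_idx % SLOT_COUNT];
    out->tag = tag;                    /* echo the tag so the primary matches */
    out->length = 0;                   /* processing result would go here     */
    to_primary.write_idx++;            /* dedicated write indicator + 1       */
    return true;
}
```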
In the present invention, the shared memory of the secondary central processing unit is configured as follows:
A. When information is reported at the processor back end, the secondary central processing unit promptly puts the information and a tag code into the second low-time-delay shared memory and increments its write indicator by one;
B. When the back-end system has information that must be output through the front-end universal asynchronous receiver/transmitter, the secondary central processing unit fills in the message and sets the tag code;
C. The main central processing unit reads the dedicated indicator value at any time to actively determine whether the secondary central processing unit has sent data;
D. The main central processing unit reads the tag code and the low-time-delay shared memory content;
E. It promptly performs the work requiring collaborative processing and increments its dedicated read indicator by one;
F. After processing, the main central processing unit fills the corresponding tag code and result message into the first low-time-delay shared memory and increments its dedicated write indicator by one;
G. The secondary central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the first low-time-delay shared memory;
H. The secondary central processing unit learns from the corresponding tag code which item the main central processing unit has completed.
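The main central processing unit's side of this reverse flow (steps C through F) might look like the sketch below; uart_write() is a placeholder for the front-end UART output routine, and the tag value matches the assumption used in the earlier sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define SLOT_COUNT      16
#define PAYLOAD_SIZE    32
#define TAG_UART_OUTPUT 0x0005u  /* assumed tag code, as in the earlier sketch */

typedef struct { uint32_t tag, length; uint8_t payload[PAYLOAD_SIZE]; } msg_slot_t;
typedef struct { volatile uint32_t write_idx, read_idx; msg_slot_t slot[SLOT_COUNT]; } mailbox_t;

static mailbox_t to_primary;    /* second shared memory: primary reads   */
static mailbox_t to_secondary;  /* first shared memory: primary replies  */

extern void uart_write(const uint8_t *data, uint32_t len);  /* placeholder */

/* Steps C-F: the primary CPU reads the index value at any time, handles the
 * secondary CPU's request (here: UART output), advances its exclusive read
 * indicator, and puts the matching tag code back into the first shared
 * memory so the secondary CPU can confirm completion.                      */
bool primary_service_once(void)
{
    if (to_primary.read_idx == to_primary.write_idx)
        return false;                                   /* nothing pending */

    msg_slot_t *in = &to_primary.slot[to_primary.read_idx % SLOT_COUNT];
    uint32_t done_tag = in->tag;
    if (done_tag == TAG_UART_OUTPUT)
        uart_write(in->payload, in->length);
    to_primary.read_idx++;                    /* exclusive read indicator + 1 */

    msg_slot_t *out = &to_secondary.slot[to_secondary.write_idx % SLOT_COUNT];
    out->tag = done_tag;              /* secondary matches completion by tag  */
    out->length = 0;
    to_secondary.write_idx++;                /* dedicated write indicator + 1 */
    return true;
}
```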
In summary, the present invention provides one or more shared memories that both the main central processing unit and the secondary central processing unit can control or read directly, allowing the multi-core processor to send and read information concurrently with minimal time delay, and also allowing the secondary central processing unit to take the lead in information processing in the reverse direction and request assistance from the main central processing unit.
Although the embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the appended claims and their equivalents.

Claims (3)

  1. An information management method for improving a multi-core processor, comprising a configuration of a shared memory of a main central processing unit and a configuration of a shared memory of a secondary central processing unit, characterized in that the main central processing unit has write access to a first low-time-delay shared memory and read access to a second low-time-delay shared memory, and the secondary central processing unit has write access to the second low-time-delay shared memory and read access to the first low-time-delay shared memory.
  2. The information management method for improving a multi-core processor according to claim 1, characterized in that the shared memory of the main central processing unit is configured as follows:
    A. Two low-time-delay shared memories are configured between the main central processing unit and the secondary central processing unit, and the main central processing unit is set to have write access to the first low-time-delay shared memory through a dedicated write indicator and exclusive read access to the second low-time-delay shared memory through a dedicated read indicator;
    B. The secondary central processing unit is set to have write access to the second low-time-delay shared memory through a dedicated write indicator and exclusive read access to the first low-time-delay shared memory through a dedicated read indicator;
    C. When information arrives at the processor front end, the main central processing unit promptly puts the information and a tag code into the first low-time-delay shared memory and increments its write indicator by one;
    D. When notified that the front-end host system has entered idle mode or power-saving mode, the main central processing unit starts the hardware system timer so that the secondary central processing unit obtains the start time synchronously;
    E. The secondary central processing unit reads the dedicated indicator value at any time to actively determine whether the main central processing unit has sent data;
    F. The secondary central processing unit reads the tag code and the low-time-delay shared memory content, promptly performs the work requiring collaborative processing, and increments its dedicated read indicator by one;
    G. After processing, the secondary central processing unit fills the corresponding tag code and result message into the second low-time-delay shared memory and increments its dedicated write indicator by one;
    H. The main central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the second low-time-delay shared memory;
    I. The main central processing unit learns from the corresponding tag code which item the secondary central processing unit has completed.
  3. The information management method for improving a multi-core processor according to claim 1, characterized in that the shared memory of the secondary central processing unit is configured as follows:
    A. When information is reported at the processor back end, the secondary central processing unit promptly puts the information and a tag code into the second low-time-delay shared memory and increments its write indicator by one;
    B. When the back-end system has information that must be output through the front-end universal asynchronous receiver/transmitter, the secondary central processing unit fills in the message and sets the tag code;
    C. The main central processing unit reads the dedicated indicator value at any time to actively determine whether the secondary central processing unit has sent data;
    D. The main central processing unit reads the tag code and the low-time-delay shared memory content;
    E. It promptly performs the work requiring collaborative processing and increments its dedicated read indicator by one;
    F. After processing, the main central processing unit fills the corresponding tag code and result message into the first low-time-delay shared memory and increments its dedicated write indicator by one;
    G. The secondary central processing unit reads its dedicated indicator value at any time to actively determine whether information has arrived in the first low-time-delay shared memory;
    H. The secondary central processing unit learns from the corresponding tag code which item the main central processing unit has completed.
PCT/CN2018/105864 2018-04-27 2018-09-14 Information management method for improving multi-core processor WO2019205448A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810395347.9 2018-04-27
CN201810395347.9A CN108829631A (en) 2018-04-27 2018-04-27 A kind of approaches to IM promoting multi-core processor

Publications (1)

Publication Number Publication Date
WO2019205448A1 true WO2019205448A1 (en) 2019-10-31

Family

ID=64155080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105864 WO2019205448A1 (en) 2018-04-27 2018-09-14 Information management method for improving multi-core processor

Country Status (2)

Country Link
CN (1) CN108829631A (en)
WO (1) WO2019205448A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198343A (en) * 2019-04-25 2019-09-03 视联动力信息技术股份有限公司 Control method and main control server based on view networking
CN112100093B (en) * 2020-08-18 2023-11-21 海光信息技术股份有限公司 Method for maintaining consistency of multiprocessor shared memory data and multiprocessor system


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69227956T2 (en) * 1991-07-18 1999-06-10 Tandem Computers Inc., Cupertino, Calif. Multiprocessor system with mirrored memory
CN1276371C (en) * 2004-03-31 2006-09-20 港湾网络有限公司 Double CPU communication systems based on PCI shared memory
CN100573497C (en) * 2007-12-26 2009-12-23 杭州华三通信技术有限公司 Communication means and system between a kind of multinuclear multiple operating system
CN101246466B (en) * 2007-11-29 2012-06-20 华为技术有限公司 Management method and device for sharing internal memory in multi-core system
CN101178701B (en) * 2007-12-11 2010-07-21 华为技术有限公司 Communicating method and system between multi-processor
WO2012069831A1 (en) * 2010-11-24 2012-05-31 Tte Systems Ltd Method and arrangement for a multi-core system
CN102541805A (en) * 2010-12-09 2012-07-04 沈阳高精数控技术有限公司 Multi-processor communication method based on shared memory and realizing device thereof
CN107632945A (en) * 2016-07-18 2018-01-26 大唐移动通信设备有限公司 The data read-write method and device of a kind of shared drive
CN107562685B (en) * 2017-09-12 2020-06-09 南京国电南自电网自动化有限公司 Method for data interaction between multi-core processor cores based on delay compensation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110283067A1 (en) * 2010-05-11 2011-11-17 International Business Machines Corporation Target Memory Hierarchy Specification in a Multi-Core Computer Processing System
CN105988970A (en) * 2015-02-12 2016-10-05 华为技术有限公司 Processor of shared storage data, and chip
CN104991868A (en) * 2015-06-09 2015-10-21 浪潮(北京)电子信息产业有限公司 Multi-core processor system and cache coherency processing method
CN106844048A (en) * 2017-01-13 2017-06-13 上海交通大学 Distributed shared memory method and system based on ardware feature

Also Published As

Publication number Publication date
CN108829631A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
US10948963B2 (en) Message handling unit
WO2020001459A1 (en) Data processing method, remote direct memory access network card, and device
US10331595B2 (en) Collaborative hardware interaction by multiple entities using a shared queue
US10540306B2 (en) Data copying method, direct memory access controller, and computer system
US7797563B1 (en) System and method for conserving power
US7549024B2 (en) Multi-processing system with coherent and non-coherent modes
US9098302B2 (en) System and apparatus to improve boot speed in serial peripheral interface system using a baseboard management controller
US9170949B2 (en) Simplified controller with partial coherency
US11461151B2 (en) Controller address contention assumption
TW201015318A (en) Performance based cache management
WO2019205448A1 (en) Information management method for improving multi-core processor
JP2532191B2 (en) A method of managing data transmission for use in a computing system having a dual bus architecture.
US6782439B2 (en) Bus system and execution scheduling method for access commands thereof
US20070073977A1 (en) Early global observation point for a uniprocessor system
EP4124963A1 (en) System, apparatus and methods for handling consistent memory transactions according to a cxl protocol
KR20050043303A (en) High speed data transmission method using direct memory access method in multi-processors condition and apparatus therefor
KR101695845B1 (en) Apparatus and method for maintaining cache coherency, and multiprocessor apparatus using the method
WO2015158264A1 (en) Method for controlling memory chip, chip controller, and memory controller
DE112016002462T5 (en) Handling a partition reset in a multi-root system
CN114356839B (en) Method, device, processor and device readable storage medium for processing write operation
US9183149B2 (en) Multiprocessor system and method for managing cache memory thereof
US20160188470A1 (en) Promotion of a cache line sharer to cache line owner
US10372638B2 (en) Interconnect agent
JP5861496B2 (en) Multiprocessor device and power control method for multiprocessor device
US9268722B1 (en) Sharing memory using processor wait states

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18916621

Country of ref document: EP

Kind code of ref document: A1