CN112667300A - Processor data access method and management device based on multiprocessor system - Google Patents

Processor data access method and management device based on multiprocessor system Download PDF

Info

Publication number
CN112667300A
CN112667300A (application number CN202011617431.4A)
Authority
CN
China
Prior art keywords
processor
processor data
data memory
memory space
code segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011617431.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Eeasy Electronic Tech Co ltd
Original Assignee
Zhuhai Eeasy Electronic Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Eeasy Electronic Tech Co ltd filed Critical Zhuhai Eeasy Electronic Tech Co ltd
Priority to CN202011617431.4A priority Critical patent/CN112667300A/en
Publication of CN112667300A publication Critical patent/CN112667300A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a processor data access method and a management device based on a multiprocessor system. The method comprises the following steps: defining a processor-specific code segment in the program, adding the segment to the linker script file, and linking all processor data into the segment through a compiler option; at system startup, allocating a processor data memory space to each of the processors and recording the offset value of each processor data memory space; at run time, each processor obtains the offset value of its processor data memory space according to its identification number and accesses its processor data. By laying out the processor data memory in this way, the invention separates the processor data areas of different CPUs, so that the CPUs do not interfere with each other when accessing their respective processor data. This prevents the inefficiency caused by cache thrashing and minimizes problems that may be caused by faulty operations such as array out-of-bounds access, so that processor data can be accessed efficiently and system stability is improved.

Description

Processor data access method and management device based on multiprocessor system
Technical Field
The present invention relates to the field of processor data access technologies, and in particular, to a processor data access method and a management device based on a multiprocessor system.
Background
With the development of chip technology, multiprocessor systems have become mainstream in mid-to-high-end applications and are widely used in desktop computers, consumer electronics, video surveillance, servers, and similar fields. While a multiprocessor system is running, each CPU maintains some general data that must be read and written in real time, referred to here as processor data. This data describes the running state of each CPU, such as the real-time load of the CPU, the number of tasks on the CPU in the ready state, the power state of the CPU, and the task the CPU is currently running. The load reflects how busy the CPU is in real time; the ready task count reflects how saturated the CPU is with tasks; the power state includes normal operation, powered down, and so on; the current task is the task the CPU is executing. During normal operation of the system, processor data is accessed frequently and provides the basis for task scheduling and other activities. Efficient access to processor data is therefore very important to the overall performance of the system.
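As an illustration only, the per-CPU processor data described above might be modeled in C roughly as follows; the structure and field names are assumptions for this sketch and are not taken from the patent:
#include <stdint.h>
/* Hypothetical per-CPU processor data; field names are illustrative. */
struct cpu_data {
    uint32_t load_percent;   /* real-time load of the CPU           */
    uint32_t ready_tasks;    /* number of tasks in the ready state  */
    uint32_t power_state;    /* e.g. 0 = powered down, 1 = running  */
    void    *current_task;   /* task the CPU is currently executing */
};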
Processor data comes in many varieties, and different classes of data may be described by different structure types. In the conventional approach, each type of processor data is described by a global structure array whose length equals the number of processors. Accessing the data through such arrays, however, can cause a cache thrashing effect among the processors and thereby reduce the overall performance of the system. The array for a given type of processor data contains the same type of data for every CPU, and these elements are packed tightly together, so they are likely to fall in the same cache line. Suppose CPU1 modifies the memory cells of its DATA1 element; the cache line state becomes Dirty. If CPU2 then modifies its own DATA1 element, cache synchronization is triggered: the system writes the cache line back to main memory, modifies the position of CPU2's DATA1 in the cache line, and sets the cache line state to Dirty again. Each such step adds an extra time-consuming write-back to main memory. If the CPUs frequently interleave reads and writes to the DATA1 array, the array is written back to main memory again and again, producing a cache thrashing effect and reducing the overall performance of the system.
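A minimal sketch of the conventional layout described above; the names NR_CPUS, cpu_stat, and update_load are illustrative assumptions, not taken from the patent:
#include <stdint.h>
#define NR_CPUS 4  /* illustrative number of processors */
/* One record per CPU, kept in a single global array. Adjacent elements
 * are only a few bytes apart, so several of them typically share one
 * 64-byte cache line. */
struct cpu_stat {
    uint32_t load_percent;
    uint32_t ready_tasks;
};
struct cpu_stat cpu_stat[NR_CPUS];
/* Each CPU updating only its own element still invalidates the shared
 * cache line on the other CPUs -- the thrashing pattern the patent
 * aims to avoid. */
void update_load(int cpu_id, uint32_t load)
{
    cpu_stat[cpu_id].load_percent = load;
}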
Disclosure of Invention
The present invention is directed to solving at least one of the problems in the prior art. The invention therefore provides a processor data access method based on a multiprocessor system, which can access processor data efficiently and prevent cache thrashing.
The invention also provides a processor data access management device, based on a multiprocessor system, that implements the above processor data access method.
The invention further provides a computer-readable storage medium storing a program that implements the processor data access method based on the multiprocessor system.
According to an embodiment of the first aspect of the invention, the processor data access method based on a multiprocessor system comprises the following steps: S100, defining a processor-specific code segment in the program, adding the segment to the linker script file, and linking all processor data into the segment through a compiler option; S200, at system startup, allocating a processor data memory space to each of the plurality of processors and recording the offset value of each processor data memory space; S300, at run time, each processor obtains the offset value of its processor data memory space according to its identification number and accesses its processor data.
The processor data access method based on a multiprocessor system according to the embodiment of the invention has at least the following beneficial effects: by laying out the processor data memory so that the processor data areas of different CPUs are separated from each other and do not share a cache line, the inefficiency caused by cache thrashing is effectively prevented and data access efficiency is improved; since the CPUs do not interfere with each other when accessing their respective processor data (key data affecting system operation), problems that may be caused by faulty operations such as array out-of-bounds access are minimized, improving system stability.
According to some embodiments of the invention, step S100 further comprises: after compiling and linking, obtaining the processor data memory space of a first processor from the start and end positions of the code segment given by the linker script file, wherein the first processor is the processor used to start the system.
According to some embodiments of the invention, step S200 comprises: S210, at system startup, allocating the processor data memory space to a plurality of second processors in their startup order, wherein the second processors are the processors in the system other than the first processor; S220, calculating the offset value of the processor data memory space corresponding to each second processor according to the starting address of the processor data memory space of the first processor.
According to some embodiments of the invention, the offset value of the processor data memory space corresponding to a second processor is calculated by subtracting the starting address of the processor data memory space of the first processor from the starting address of the processor data memory space of the second processor.
According to some embodiments of the invention, step S200 further comprises: storing the offset value of each processor data memory space into a global array according to the identification number of the processor.
According to some embodiments of the invention, step S200 further comprises: when the processor data memory spaces are allocated to the processors, leaving gaps between the processor data memory spaces of different processors.
According to some embodiments of the invention, the method further comprises: performing out-of-bounds detection on accesses to the processor data according to the gaps between the processor data memory spaces.
According to some embodiments of the invention, step S300 further comprises: the processor accesses the processor data through a macro definition.
A processor data access management apparatus based on a multiprocessor system according to an embodiment of the second aspect of the invention comprises: a compile-and-link module for defining a processor-specific code segment in the program, adding the segment to the linker script file, and linking all processor data into the segment through a compiler option; a startup allocation module for allocating a processor data memory space to each of the plurality of processors at system startup and recording the offset value of each processor data memory space; and an access control module for obtaining, at run time, the offset value of a processor data memory space according to the processor identification number and accessing the processor data.
The processor data access management apparatus based on a multiprocessor system according to the embodiment of the invention has at least the following beneficial effects: by laying out the processor data memory so that the processor data areas of different CPUs are separated from each other and do not share a cache line, the inefficiency caused by cache thrashing is effectively prevented and data access efficiency is improved; since the CPUs do not interfere with each other when accessing their respective processor data (key data affecting system operation), problems that may be caused by faulty operations such as array out-of-bounds access are minimized, improving system stability.
A computer-readable storage medium according to an embodiment of the third aspect of the invention has stored thereon a computer program which, when executed by a processor, implements a method according to an embodiment of the first aspect of the invention.
The computer-readable storage medium according to an embodiment of the present invention has at least the same advantageous effects as the method according to an embodiment of the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the layout of processor data in the method according to the embodiment of the invention;
FIG. 3 is a block diagram of the modules of the system of an embodiment of the present invention.
Reference numerals:
compile-and-link module 100, startup allocation module 200, and access control module 300.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding", and the like are understood as excluding the stated number, while "above", "below", "within", and the like are understood as including the stated number. Where "first" and "second" are used to distinguish technical features, they are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or the order of the indicated technical features.
Referring to fig. 1, the method of an embodiment of the present invention comprises: S100, defining a processor-specific code segment in the program, adding the segment to the linker script file, and linking all processor data into the segment through a compiler option; S200, at system startup, allocating a processor data memory space to each of the plurality of processors and recording the offset value of each processor data memory space; and S300, at run time, each processor obtains the offset value of its processor data memory space according to its identification number and accesses its processor data.
The memory layout of the processor data according to the embodiment of the present invention is shown in fig. 2. First, a processor-data-specific code segment, for example named CPU_DATA_SECTION, is defined in the program and added to the linker script file (scatter file). All processor data is placed in this segment via a compiler option. In the compile-and-link phase, all processor data is linked into the CPU_DATA_SECTION segment, i.e., stored in the CPU1 DATA AREA shown in FIG. 2. The start and end positions of the segment can be read from the linker script file, which yields the starting address of the processor data memory space CPU1 DATA AREA corresponding to the first processor (i.e., the CPU that boots the system and is the first to power on; in this embodiment, CPU1).
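As a hedged illustration of this step (the attribute syntax is GCC-style, and the symbol names __cpu_data_start and __cpu_data_end are assumptions, not taken from the patent), per-CPU variables could be placed into such a segment, and its boundaries exposed by the linker script, roughly as follows:
/* C side: force every processor-data variable into the dedicated segment.
 * A compiler option or a macro such as this could apply it globally. */
#define PER_CPU __attribute__((section("CPU_DATA_SECTION")))
PER_CPU unsigned int cpu_load;         /* ends up in CPU1 DATA AREA */
PER_CPU unsigned int cpu_ready_tasks;
/* Symbols assumed to be defined by the linker script around the segment. */
extern char __cpu_data_start[];
extern char __cpu_data_end[];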
In the system startup phase, corresponding processor data memory spaces, such as the CPU2 DATA AREA, CPU3 DATA AREA, ..., CPUN DATA AREA shown in fig. 2, are dynamically allocated for the other CPUs, and the offset between the starting address of each CPU's processor data memory space and the starting address of CPU1 DATA AREA is calculated, where OFFSET2 denotes the offset between CPU2's area and CPU1 DATA AREA and OFFSETN denotes the offset between CPUN's area and CPU1 DATA AREA. OFFSET2, OFFSET3, ..., OFFSETN are then saved into a global array indexed by CPU number, so that the array index corresponds directly to the CPU number.
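A minimal sketch of this startup step, assuming the boundary symbols from the previous snippet; alloc_cpu_area, cpu_data_offset, and NR_CPUS are hypothetical names introduced for illustration only:
#include <stddef.h>
#include <stdint.h>
#define NR_CPUS 4                           /* illustrative CPU count */
extern char __cpu_data_start[];             /* start of CPU1 DATA AREA */
extern char __cpu_data_end[];               /* end of CPU1 DATA AREA   */
static ptrdiff_t cpu_data_offset[NR_CPUS];  /* global array indexed by CPU number */
extern void *alloc_cpu_area(size_t size);   /* hypothetical allocator */
void setup_cpu_data_areas(void)
{
    size_t area_size = (size_t)(__cpu_data_end - __cpu_data_start);
    cpu_data_offset[0] = 0;                 /* CPU1 uses the linked area itself */
    for (int cpu = 1; cpu < NR_CPUS; cpu++) {
        char *area = alloc_cpu_area(area_size);
        /* offset = start of this CPU's area - start of CPU1 DATA AREA */
        cpu_data_offset[cpu] = (ptrdiff_t)((uintptr_t)area - (uintptr_t)__cpu_data_start);
    }
}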
When the system is running normally, if a CPU needs to access its own processor data, it first obtains the address of the corresponding processor data item within CPU1 DATA AREA, then obtains the offset between its own processor data area and that of CPU1 according to its CPU number, and finally adds the two to obtain the address of its processor data. For example, if CPUN needs to access its DATA1, it first obtains the address of CPU1's DATA1 and then adds OFFSETN, the offset of its own processor data area. CPU1's DATA1 is an ordinary variable whose address can be obtained directly with the address-of operator ("&"); OFFSETN is the offset calculated during startup, stored in the global array, and can be fetched directly by CPU number. The address of CPUN's DATA1 is therefore obtained by a simple addition of the two, after which the data can be accessed. The whole access step can be wrapped in a macro definition, which keeps the program very concise:
#define GET_CPU_DATA(cpu_id, DATA) (Addr(DATA) + OFFSET(cpu_id))
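As an illustration only, the macro above could be fleshed out and used roughly as follows; Addr, OFFSET, cpu_data_offset, PER_CPU, and get_cpu_load are assumed helper names carried over from the earlier sketches and are not defined by the patent:
/* Assumed helpers: Addr takes the address of CPU1's copy of a variable,
 * OFFSET looks up the offset recorded at boot for the given CPU. */
#define Addr(DATA)       ((char *)&(DATA))
#define OFFSET(cpu_id)   (cpu_data_offset[(cpu_id)])
#define GET_CPU_DATA(cpu_id, DATA) (Addr(DATA) + OFFSET(cpu_id))
PER_CPU unsigned int cpu_load;   /* CPU1's copy lives in CPU1 DATA AREA */
/* Read the load value belonging to a given CPU. */
unsigned int get_cpu_load(int cpu_id)
{
    return *(unsigned int *)GET_CPU_DATA(cpu_id, cpu_load);
}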
According to the embodiment of the invention, with the memory layout and code flow described above, the processor data areas of different CPUs are separated from each other and never fall in the same cache line, which effectively prevents the inefficiency caused by cache thrashing.
In the embodiment of the invention, when the processor data memory spaces are allocated to the processors, a gap is left between the processor data memory spaces of different processors; out-of-bounds detection is then performed on processor data accesses according to these gaps, preventing memory overruns from putting the system at risk.
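One possible way to use such gaps for out-of-bounds detection, shown purely as a sketch under stated assumptions (the guard-pattern approach, GUARD_SIZE, and check_cpu_data_guard are illustrative and not specified by the patent), is to fill each gap with a known byte pattern and verify it periodically:
#include <stdint.h>
#include <string.h>
#define GUARD_SIZE    64      /* illustrative gap size between per-CPU areas */
#define GUARD_PATTERN 0xA5    /* known filler byte */
/* Fill the gap that follows one CPU's data area. */
void arm_cpu_data_guard(char *area_end)
{
    memset(area_end, GUARD_PATTERN, GUARD_SIZE);
}
/* Return nonzero if the gap was overwritten, i.e. a write ran past the
 * owning CPU's processor data area. */
int check_cpu_data_guard(const char *area_end)
{
    for (int i = 0; i < GUARD_SIZE; i++) {
        if ((uint8_t)area_end[i] != GUARD_PATTERN)
            return 1;
    }
    return 0;
}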
The system of the embodiment of the present invention, referring to fig. 3, comprises: a compile-and-link module 100 for defining a processor-specific code segment in the program, adding the segment to the linker script file, and linking all processor data into the segment through a compiler option; a startup allocation module 200 for allocating a processor data memory space to each of the plurality of processors at system startup and recording the offset value of each processor data memory space; and an access control module 300 for obtaining, at run time, the offset value of a processor data memory space according to the processor identification number and accessing the processor data.
Although specific embodiments have been described herein, those of ordinary skill in the art will recognize that many other modifications or alternative embodiments are equally within the scope of this disclosure. For example, any of the functions and/or processing capabilities described in connection with a particular device or component may be performed by any other device or component. In addition, while various illustrative implementations and architectures have been described in accordance with embodiments of the present disclosure, those of ordinary skill in the art will recognize that many other modifications of the illustrative implementations and architectures described herein are also within the scope of the present disclosure.
Certain aspects of the present disclosure are described above with reference to block diagrams and flowchart illustrations of systems, methods, apparatuses, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by executing computer-executable program instructions. Also, according to some embodiments, some blocks of the block diagrams and flow diagrams may not necessarily be performed in the order shown, or may not necessarily be performed in their entirety. In addition, additional components and/or operations beyond those shown in the block diagrams and flow diagrams may be present in certain embodiments.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special purpose hardware and computer instructions.
Program modules, applications, etc. described herein may include one or more software components, including, for example, software objects, methods, data structures, etc. Each such software component may include computer-executable instructions that, in response to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed.
The software components may be encoded in any of a variety of programming languages. An illustrative programming language may be a low-level programming language, such as assembly language associated with a particular hardware architecture and/or operating system platform. Software components that include assembly language instructions may need to be converted by an assembler program into executable machine code prior to execution by a hardware architecture and/or platform. Another exemplary programming language may be a higher level programming language, which may be portable across a variety of architectures. Software components that include higher level programming languages may need to be converted to an intermediate representation by an interpreter or compiler before execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a scripting language, a database query or search language, or a report writing language. In one or more exemplary embodiments, a software component containing instructions of one of the above programming language examples may be executed directly by an operating system or other software component without first being converted to another form.
The software components may be stored as files or other data storage constructs. Software components of similar types or related functionality may be stored together, such as in a particular directory, folder, or library. Software components may be static (e.g., preset or fixed) or dynamic (e.g., created or modified at execution time).
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (10)

1. A method for accessing processor data based on a multiprocessor system, comprising the steps of:
S100, defining processor-specific code segments in a program, adding the code segments to a linker script file, and linking all processor data to the code segments through a compiler option;
S200, starting a system, allocating a processor data memory space to each of a plurality of processors, and recording offset values of the processor data memory spaces;
S300, when the system runs, the processor obtains the offset value of its processor data memory space according to its identification number and accesses the processor data.
2. The multiprocessor system-based processor data access method of claim 1, wherein the step S100 further comprises: after compiling and linking, obtaining the processor data memory space of a first processor from the start and end positions of the code segments given by the linker script file, wherein the first processor is the processor used to start the system.
3. The multiprocessor system-based processor data access method of claim 2, wherein the step S200 comprises:
s210, starting a system, and distributing the processor data memory space for a plurality of second processors according to a starting sequence, wherein the second processors are processors except the first processor in the system;
s220, calculating the offset value of the processor data memory space corresponding to the second processor according to the starting address of the processor data memory space of the first processor.
4. The multiprocessor system-based processor data access method of claim 3, wherein the offset value of the processor data memory space corresponding to the second processor is calculated by: subtracting a starting address of the processor data memory space of the first processor from a starting address of the processor data memory space of the second processor.
5. The multiprocessor system-based processor data access method of claim 1, wherein the step S200 further comprises: storing the offset value of the processor data memory space into a global array according to the identification number of the processor.
6. The multiprocessor system-based processor data access method of claim 1, wherein the step S200 further comprises: when the processor data memory spaces are allocated to the processors, leaving gaps between the processor data memory spaces of different processors.
7. The multiprocessor system-based processor data access method of claim 6, further comprising: performing out-of-bounds detection on accesses to the processor data according to the gaps between the processor data memory spaces.
8. The multiprocessor system-based processor data access method of claim 1, wherein the step S300 further comprises: the processor accesses the processor data through a macro definition.
9. A multiprocessor system-based processor data access management apparatus, using the method of any of claims 1 to 8, comprising:
a compile-and-link module for defining processor-specific code segments in a program, adding the code segments to a linker script file, and linking all processor data to the code segments through a compiler option;
a startup allocation module for allocating a processor data memory space to each of a plurality of processors and recording offset values of the processor data memory spaces when the system is started;
and an access control module for obtaining the offset value of the processor data memory space according to the processor identification number when the system runs and accessing the processor data.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 8.
CN202011617431.4A 2020-12-30 2020-12-30 Processor data access method and management device based on multiprocessor system Pending CN112667300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011617431.4A CN112667300A (en) 2020-12-30 2020-12-30 Processor data access method and management device based on multiprocessor system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011617431.4A CN112667300A (en) 2020-12-30 2020-12-30 Processor data access method and management device based on multiprocessor system

Publications (1)

Publication Number Publication Date
CN112667300A true CN112667300A (en) 2021-04-16

Family

ID=75411431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011617431.4A Pending CN112667300A (en) 2020-12-30 2020-12-30 Processor data access method and management device based on multiprocessor system

Country Status (1)

Country Link
CN (1) CN112667300A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102566973A (en) * 2012-02-15 2012-07-11 上海大学 Dynamic allocation method for instruction memory cell for multi-core heterogeneous system
CN103377131A (en) * 2012-04-13 2013-10-30 索尼公司 Data processing device and data processing method
CN103793255A (en) * 2014-02-27 2014-05-14 重庆邮电大学 Configurable multi-main-mode multi-OS-inner-core real-time operating system structure and starting method
CN104536764A (en) * 2015-01-09 2015-04-22 浪潮(北京)电子信息产业有限公司 Program running method and device
CN106776186A (en) * 2016-12-29 2017-05-31 湖南国科微电子股份有限公司 CPU running statuses adjustment method and system under a kind of multi-CPU architecture
CN111124921A (en) * 2019-12-25 2020-05-08 北京字节跳动网络技术有限公司 Memory out-of-range detection method, device, equipment and storage medium
CN111400202A (en) * 2020-03-13 2020-07-10 宁波中控微电子有限公司 Addressing method and module applied to on-chip control system and on-chip control system


Similar Documents

Publication Publication Date Title
JP5255348B2 (en) Memory allocation for crash dump
CN1894662B (en) Processor cache memory as ram for execution of boot code
US4812981A (en) Memory management system improving the efficiency of fork operations
US11556348B2 (en) Bootstrapping profile-guided compilation and verification
CN107615243B (en) Method, device and system for calling operating system library
KR20200135718A (en) Method, apparatus, device and storage medium for managing access request
US9465594B2 (en) Distributed implementation of sequential code that includes a future
US20230289187A1 (en) Method and apparatus for rectifying weak memory ordering problem
CN102388370A (en) Computer process management
CN112667246A (en) Application function extension method and device and electronic equipment
US20160019031A1 (en) Method and system for processing memory
US9442790B2 (en) Computer and dumping control method
US10379827B2 (en) Automatic identification and generation of non-temporal store and load operations in a dynamic optimization environment
US20080005726A1 (en) Methods and systems for modifying software applications to implement memory allocation
US20090187911A1 (en) Computer device with reserved memory for priority applications
KR102658600B1 (en) Apparatus and method for accessing metadata when debugging a device
CN112667300A (en) Processor data access method and management device based on multiprocessor system
US20130097357A1 (en) Method for identifying memory of virtual machine and computer system thereof
KR102456017B1 (en) Apparatus and method for file sharing between applications
KR100727627B1 (en) Method for supporting application using dynamic linking library and system using the method
CN111221535B (en) Thread allocation method, server and computer readable storage medium
US20160253120A1 (en) Multicore programming apparatus and method
US8769221B2 (en) Preemptive page eviction
CN116450966A (en) Cache access method and device, equipment and storage medium
JP3293821B2 (en) Dynamic link system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination