CN115686774A - High-efficiency read-write strategy supporting data priority and implementation method

Info

Publication number
CN115686774A
CN115686774A
Authority
CN
China
Prior art keywords
data
priority
task
write
read
Prior art date
Legal status
Pending
Application number
CN202211180641.0A
Other languages
Chinese (zh)
Inventor
施志强
陈树峰
李明磊
韩伟伦
张杨
王仁
蒋志翔
Current Assignee
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Computer Technology and Applications filed Critical Beijing Institute of Computer Technology and Applications
Priority to CN202211180641.0A
Publication of CN115686774A
Legal status: Pending

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to an efficient read-write strategy supporting data priority and an implementation method, and belongs to the field of computer software. On the basis of providing a basic data-transmission function, the invention addresses application scenarios that place high priority and performance requirements on data. The transmit and receive paths process high-priority data first, guaranteeing the priority order of resource use. Task execution is never delayed by waiting for a peripheral resource, and no extra CPU time is consumed, so the CPU is free to execute other tasks or other functions within the task. While meeting the basic functional requirements, the invention additionally supports prioritized data transmission and reception, combining the blocking mode's advantage of not occupying CPU resources with the non-blocking and polling modes' advantage of never suspending the task.

Description

High-efficiency read-write strategy supporting data priority and implementation method
Technical Field
The invention belongs to the field of computer software, and particularly relates to an efficient read-write strategy supporting data priority and an implementation method.
Background
Any computer system runs through the joint effort of its software and hardware, and the communication bridge between the two is the device driver. Device drivers free software engineers from caring about how the hardware works, and hardware engineers from caring about software design, improving the efficiency of both software development and hardware design.
A piece of advanced hardware determines its performance upper bound, and a well-designed driver lets the operating system approach that bound. The driver frameworks of operating systems at home and abroad -- Linux, Windows, Tyche, RT-Thread, VxWorks, and the like -- are similar: they implement the driver-layer interfaces dev_open(), dev_read(), dev_write(), and dev_close() corresponding to the system calls open(), read(), write(), and close(), together with the interrupt service routines inside the driver. The most important parts are the strategy of the device read-write interface and the design of the interrupt service routine; the quality of their design directly determines device function and performance.
A traditional device driver mainly provides a linear data transmit/receive function: it either operates the hardware directly or after simple software caching. For read-write performance, software-side blocking and non-blocking I/O are mainly combined with the hardware-side response modes of polling and interrupt.
In blocking I/O, a task that cannot obtain a system resource suspends itself to wait for the resource to be released and reacquired. The release generally depends on an interrupt service routine: after the hardware device triggers an interrupt, the routine finishes processing the data and releases the resource, and the task resumes execution. In non-blocking I/O, the call returns immediately when the task cannot acquire the resource. In polling mode, the CPU queries each peripheral at a fixed period and performs the corresponding input/output service if the device needs it; otherwise it moves on to query the next peripheral. In interrupt mode, the peripheral notifies the CPU when it has data to exchange, and the interrupt service routine processes the data. In practice, data reading and writing are usually combined with hardware interrupts; hardware polling is rarely used.
In a traditional device driver, the data structure of the transmit/receive buffer dictates a single, strictly ordered sending mode. For application scenarios in which some data must receive resources preferentially, the inability to handle data priority becomes a pain point for users. In addition, device input/output is realized through the blocking and non-blocking operation functions dev_read() and dev_write() combined with the hardware's polling and interrupt modes. A common transmission scheme uses non-blocking I/O on the software side with neither polling nor interrupts on the hardware side. In this scheme, before every send the software must query the hardware status register to determine whether the device is working: the send proceeds only when the device is idle, otherwise the call returns immediately, and the task may have to attempt the same group of data frames many times. The CPU thus performs a large number of invalid operations, wasting its limited resources. Conversely, when a task suspends and waits, its subsequent functions cannot execute, greatly reducing software efficiency -- unacceptable to users in scenarios where the follow-on work must run.
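The wasted CPU work described above can be illustrated with a small C sketch. The register names, bit layout, and function name are illustrative assumptions, with the memory-mapped hardware registers simulated by plain variables so the sketch is self-contained:

```c
#include <stdint.h>
#include <stddef.h>

/* Simulated device registers -- stand-ins for memory-mapped hardware.
 * Bit 0 of the status register is assumed to be the busy flag. */
static uint32_t status_reg = 0x1;   /* start in the busy state */
static uint32_t tx_reg;

#define HW_BUSY() ((status_reg & 0x1u) != 0)

/* Traditional non-blocking write: returns -1 immediately when the device is
 * busy, so the task must call it again and again for the same frame -- each
 * failed call is one of the "invalid operations" described above. */
int dev_write_nonblocking(const uint8_t *frame, size_t len)
{
    if (HW_BUSY())
        return -1;                  /* device busy: give up, caller retries */
    for (size_t i = 0; i < len; i++)
        tx_reg = frame[i];          /* device idle: push the frame */
    return (int)len;
}
```

A caller that needs the frame sent has no choice but to loop on this function, burning CPU cycles until the status register finally reads idle.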
Disclosure of Invention
Technical problem to be solved
The invention aims to solve the technical problem of how to provide an efficient read-write strategy supporting data priority and an implementation method thereof, so as to solve the problems of existing methods: the CPU performs a large number of invalid operations that occupy and waste its limited resources, and software running efficiency is greatly reduced.
(II) technical scheme
In order to solve the above technical problem, the present invention provides an efficient read-write strategy supporting data priority and a method for implementing it, in which the data-writing method comprises the following steps:
S101, the write function write() in a task calls the driver function dev_write() in the system; the data sent by the write function is a data frame data_frame;
S102, dev_write() first judges whether the hardware peripheral working-state register indicates busy; if not, steps S103-S104 are executed, and if so, steps S105-S107 are executed;
S103, dev_write() writes the data frame directly into the hardware device's send register to execute the send;
S104, dev_write() returns to the task, and the task's subsequent functions execute;
S105, dev_write() judges the priority field of the data frame;
S106, dev_write() writes the data frame into the data buffer of the corresponding priority;
S107, dev_write() returns to the task, and the task continues to execute;
S108, the hardware peripheral finishes sending, triggers the send-completion interrupt, and the interrupt service routine executes;
S109, the interrupt service routine checks the data buffers level by level for data, sending data from the high-priority buffer first; if no buffer holds data, step S110 is executed, otherwise steps S111-S112 are executed;
S110, the interrupt service routine returns to the task, which continues to execute;
S111, the interrupt service routine takes out the data, updates the buffer, writes the data into the hardware device's send register, and executes the send;
S112, the interrupt service routine returns to the task, which continues to execute its subsequent functions.
Further, each data frame is provided with a priority field, and a separate data buffer is set up for each priority.
Further, the buffer is a singly linked list, a circular linked list, or a queue.
Further, the user applies for a segment of memory as the buffer and maintains it with a buffer data structure comprising the priority, the start address of the applied memory, the memory address of the head data frame, the memory address of the tail data frame, and the number of cached data frames; the buffer is used circularly.
Further, the buffer is used circularly: the start address points to the memory segment, and when the head data frame reaches the maximum applied memory address, it wraps back to the applied start address so that the memory space is reused circularly.
The invention also discloses an efficient read-write strategy supporting data priority and an implementation method, in which the data-reading method comprises the following steps:
S201, the read function read() in a task calls the driver function dev_read();
S202, dev_read() checks the buffers of each priority in turn for data; if none holds data, step S204 is executed, and if data exists, steps S203-S204 are executed;
S203, dev_read() reads the data frame and updates the buffer of the corresponding priority;
S204, dev_read() returns to the task, and the task's subsequent functions execute;
S205, the hardware peripheral receives data, triggers the receive interrupt, and the interrupt service routine is entered;
S206, the interrupt service routine judges the priority field of the received data frame;
S207, the interrupt service routine writes the received data frame into the buffer of the corresponding priority and updates the buffer;
S208, the interrupt service routine returns to the task, and the task's subsequent functions execute.
Further, each data frame is provided with a priority field, and a separate data buffer is set up for each priority.
Further, the buffer is a singly linked list, a circular linked list, or a queue.
Further, the user applies for a segment of memory as the buffer and maintains it with a buffer data structure comprising the priority, the start address of the applied memory, the memory address of the head data frame, the memory address of the tail data frame, and the number of cached data frames; the buffer is used circularly.
Further, the buffer is used circularly: the start address points to the memory segment, and when the head data frame reaches the maximum applied memory address, it wraps back to the applied start address so that the memory space is reused circularly.
(III) advantageous effects
The invention provides an efficient read-write strategy supporting data priority and an implementation method. Its key points are:
1. For data transmission, the working-state mechanism of the hardware peripheral is combined with a software cache mechanism so that the moment a device executes a send within a task is controllable, maximizing task-execution efficiency and improving software efficiency;
2. Using the send-completion interrupt mechanism, the interrupt service routine fetches cached data and sends it by itself, so the CPU performs only effective operations;
3. The data-frame and buffer data structures supporting transmit/receive priority meet the need to process urgent information.
The invention mainly concerns an efficient software design supporting a prioritized data read-write processing mode. On the basis of providing a basic data-transmission function, it addresses application scenarios that place high priority and performance requirements on data. The transmit and receive paths process high-priority data first, guaranteeing the priority order of resource use. Task execution is never delayed by waiting for a peripheral resource (the device being idle, or a send completing), and no extra CPU time is consumed (as in non-blocking or polling modes), so the CPU can execute other tasks or other functions within the task. The software design achieves the following effect: while meeting the basic functional requirements, it additionally supports prioritized data transmission and reception, combining the blocking mode's advantage of not occupying CPU resources with the non-blocking and polling modes' advantage of never suspending the task.
Drawings
FIG. 1 is a flow chart of data writing of the present invention;
FIG. 2 is a flow chart of data reading according to the present invention;
FIG. 3 is a diagram illustrating a data structure with a priority buffer according to the present invention.
Detailed Description
In order to make the objects, contents and advantages of the present invention more apparent, the following detailed description of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention remedies the defects of the traditional write mode of existing drivers by: 1. supporting a data transmit/receive priority mechanism; 2. eliminating the resource waste caused by invalid CPU operations; 3. eliminating the task suspension that makes software run inefficiently.
The software design scheme of the invention is based on the characteristics of general-purpose peripheral hardware; by fully exploiting the hardware's send-completion interrupt in combination with the software design, it realizes an efficient processing method supporting prioritized data reads and writes.
For scenarios with priority requirements on data transmission and reception, a data-frame structure with a priority field is provided, and a separate data buffer is set up for each priority. The buffer may be a singly linked list, a circular linked list, a queue, and so on, and the data buffers may be assigned priorities 0, 1, 2, ... as needed; for explanation, the invention uses only two priority levels, as shown in Fig. 3.
Data frame structure (see Fig. 3): a priority field followed by the frame data.
Data structure with priority buffer (see Fig. 3): the buffer-management fields enumerated in the following paragraph.
In this caching scheme, the user applies for a segment of memory as the buffer according to the actual usage of the scenario and maintains it with a buffer data structure comprising the priority, the start address of the applied memory, the memory address of the head data frame, the memory address of the tail data frame, and the number of cached data frames. The buffer is used circularly: the start address points to the memory segment, and when the head data frame reaches the maximum applied memory address it wraps back to the applied start address so that the memory space is reused; the tail data frame behaves likewise. Note that, compared with a dynamically growing singly linked list, this scheme requires the user to apply for enough memory up front to avoid running out, but data access is faster than a linked list and management is more convenient.
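A minimal C sketch of the buffer just described might look as follows. All identifiers, the frame size, and the fixed capacity are illustrative assumptions (the text leaves concrete types to the implementer); head and tail are kept as indices into the applied block rather than raw addresses:

```c
#include <stdint.h>

#define FRAME_SIZE 64   /* bytes per data frame (illustrative) */
#define MAX_FRAMES 8    /* frames per buffer; user-chosen capacity */

/* Data frame with a priority field, as described above. */
typedef struct {
    uint8_t priority;                 /* 0 = high, 1 = low (two levels here) */
    uint8_t payload[FRAME_SIZE - 1];
} data_frame_t;

/* Priority buffer: one contiguous block used as a ring. Fields follow the
 * description: priority, base (the applied memory), head, tail, count. */
typedef struct {
    uint8_t      priority;
    data_frame_t base[MAX_FRAMES];    /* the "applied" memory block */
    uint32_t     head;                /* index of the oldest frame */
    uint32_t     tail;                /* index of the next free slot */
    uint32_t     count;               /* number of cached frames */
} prio_buf_t;

/* Enqueue: write at tail, wrap when the end of the block is reached. */
int buf_put(prio_buf_t *b, const data_frame_t *f)
{
    if (b->count == MAX_FRAMES)
        return -1;                    /* full: memory was sized too small */
    b->base[b->tail] = *f;
    b->tail = (b->tail + 1) % MAX_FRAMES;  /* circular reuse of the block */
    b->count++;
    return 0;
}

/* Dequeue: read at head, wrap the same way. */
int buf_get(prio_buf_t *b, data_frame_t *out)
{
    if (b->count == 0)
        return -1;
    *out = b->base[b->head];
    b->head = (b->head + 1) % MAX_FRAMES;
    b->count--;
    return 0;
}
```

The modulo wrap implements the "return to the applied start address" behavior; the count field distinguishes a full ring from an empty one.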
As shown in Fig. 1, the data-writing steps of the invention are as follows:
S101, the write function write() in a task calls the driver function dev_write() in the system; the data sent by the write function is a data frame data_frame;
S102, dev_write() first judges whether the hardware peripheral working-state register indicates busy; if not, steps S103-S104 are executed, and if so, steps S105-S107 are executed;
The working-state register is a four-byte indicator (or two bytes or one byte, as determined by the hardware design); generally bit 0 serves as the specific indicator bit. Whether 0 means working or not working is fixed by the hardware design, with 1 meaning the opposite.
S103, dev_write() writes the data frame directly into the hardware device's send register to execute the send;
S104, dev_write() returns to the task, and the task's subsequent functions execute;
S105, dev_write() judges the priority field of the data frame;
S106, dev_write() writes the data frame into the data buffer of the corresponding priority;
The buffer may be of any size and any data structure -- for example a singly linked list, which can be adjusted dynamically, or a whole block of memory applied in advance (its size fixed at the number of data frames times the frame size).
S107, dev_write() returns to the task, and the task continues to execute;
S108, the hardware peripheral finishes sending and triggers the send-completion interrupt (this step is performed by hardware), and the interrupt service routine executes;
S109, the interrupt service routine checks the data buffers level by level for data, sending data from the high-priority buffer first; if no buffer holds data, step S110 is executed, otherwise steps S111-S112 are executed;
S110, the interrupt service routine returns to the task, which continues to execute;
S111, the interrupt service routine takes out the data, updates the buffer, writes the data into the hardware device's send register, and executes the send;
S112, the interrupt service routine returns to the task, which continues to execute its subsequent functions.
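The write path and send-completion interrupt of steps S101-S112 can be sketched in C as follows. The register variables, buffer layout, and function names are illustrative assumptions, with the hardware simulated by plain variables so the control flow can be followed end to end:

```c
#include <stdint.h>

/* Minimal self-contained simulation of steps S101-S112.
 * Names and sizes are illustrative, not taken from the patent text. */
enum { NPRIO = 2, CAP = 4, FLEN = 8 };

typedef struct { uint8_t prio; uint8_t data[FLEN]; } frame_t;

static struct { frame_t q[CAP]; int head, tail, count; } buf[NPRIO];
static int     hw_busy;   /* simulated working-state register */
static frame_t hw_tx;     /* simulated send register */

static void hw_send(const frame_t *f) { hw_tx = *f; hw_busy = 1; }

/* S101-S107: the write path. */
int dev_write(const frame_t *f)
{
    if (!hw_busy) {                  /* S102-S104: device idle, send directly */
        hw_send(f);
        return 0;
    }
    int p = f->prio < NPRIO ? f->prio : NPRIO - 1;   /* S105: judge priority */
    if (buf[p].count == CAP)
        return -1;                   /* buffer full */
    buf[p].q[buf[p].tail] = *f;      /* S106: cache by priority */
    buf[p].tail = (buf[p].tail + 1) % CAP;
    buf[p].count++;
    return 0;                        /* S107: return, task keeps running */
}

/* S108-S112: the send-completion interrupt service routine. */
void tx_complete_isr(void)
{
    hw_busy = 0;
    for (int p = 0; p < NPRIO; p++) {        /* S109: high priority first */
        if (buf[p].count > 0) {
            frame_t f = buf[p].q[buf[p].head];  /* S111: take, update, send */
            buf[p].head = (buf[p].head + 1) % CAP;
            buf[p].count--;
            hw_send(&f);
            return;                  /* S112 */
        }
    }
    /* S110: nothing cached -- simply return to the task */
}
```

Note how the task never waits: a busy device costs one enqueue, and the ISR drains the highest-priority buffer on each completion, so the CPU only ever does useful work.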
As shown in Fig. 2, the receiving process uses the receive interrupt: after a data frame arrives, its priority is judged and it is stored in the buffer of the corresponding priority, waiting for the task, which takes data from the high-priority buffer first. Data reception divides into two stages: in stage one, after the hardware receives the data, the interrupt service routine caches it into the receive buffer (steps S205 to S208); in stage two, the task's read function takes the data out of the receive buffer (steps S201 to S204).
The data-reception steps are as follows:
S201, the read function read() in a task calls the driver function dev_read();
S202, dev_read() checks the buffers of each priority in turn for data; if none holds data, step S204 is executed, and if data exists, steps S203-S204 are executed;
S203, dev_read() reads the data frame and updates the buffer of the corresponding priority;
S204, dev_read() returns to the task, and the task's subsequent functions execute;
S205, the hardware peripheral receives data, triggers the receive interrupt, and the interrupt service routine is entered;
S206, the interrupt service routine judges the priority field of the received data frame;
S207, the interrupt service routine writes the received data frame into the buffer of the corresponding priority and updates the buffer;
S208, the interrupt service routine returns to the task, and the task's subsequent functions execute.
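The receive path of steps S201-S208 can be sketched the same way. The buffer layout and function names are again illustrative assumptions; the receive interrupt is modeled as a function that hands over the newly arrived frame:

```c
#include <stdint.h>

/* Minimal self-contained simulation of steps S201-S208.
 * Names and sizes are illustrative only. */
enum { NPRIO = 2, CAP = 4, FLEN = 8 };

typedef struct { uint8_t prio; uint8_t data[FLEN]; } frame_t;

static struct { frame_t q[CAP]; int head, tail, count; } rxbuf[NPRIO];

/* S205-S208: receive interrupt service routine -- the hardware delivered a
 * new frame; sort it into the buffer matching its priority field. */
void rx_isr(const frame_t *f)
{
    int p = f->prio < NPRIO ? f->prio : NPRIO - 1;   /* S206 */
    if (rxbuf[p].count == CAP)
        return;                                      /* overflow: frame dropped */
    rxbuf[p].q[rxbuf[p].tail] = *f;                  /* S207 */
    rxbuf[p].tail = (rxbuf[p].tail + 1) % CAP;
    rxbuf[p].count++;
}

/* S201-S204: task-side read -- scan buffers from high to low priority and
 * take the first available frame; return immediately if all are empty. */
int dev_read(frame_t *out)
{
    for (int p = 0; p < NPRIO; p++) {                /* S202: high first */
        if (rxbuf[p].count > 0) {
            *out = rxbuf[p].q[rxbuf[p].head];        /* S203 */
            rxbuf[p].head = (rxbuf[p].head + 1) % CAP;
            rxbuf[p].count--;
            return 0;
        }
    }
    return -1;                                       /* S204: nothing cached */
}
```

Because dev_read() scans buffers high-priority first, an urgent frame that arrived after a low-priority one is still delivered to the task first.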
The efficient processing mode of the invention supporting prioritized data reads and writes differs from the traditional device-driver sending mode. Its key points are:
1. For data transmission, the working-state mechanism of the hardware peripheral is combined with a software cache mechanism so that the moment a device executes a send within a task is controllable, maximizing task-execution efficiency and improving software efficiency;
2. Using the send-completion interrupt mechanism, the interrupt service routine fetches cached data and sends it by itself, so the CPU performs only effective operations;
3. The data-frame and buffer data structures supporting transmit/receive priority meet the need to process urgent information.
The invention mainly concerns an efficient software design supporting a prioritized data read-write processing mode. On the basis of providing a basic data-transmission function, it addresses application scenarios that place high priority and performance requirements on data. The transmit and receive paths process high-priority data first, guaranteeing the priority order of resource use. Task execution is never delayed by waiting for a peripheral resource (the device being idle, or a send completing), and no extra CPU time is consumed (as in non-blocking or polling modes), so the CPU can execute other tasks or other functions within the task. The software design achieves the following effect: while meeting the basic functional requirements, it additionally supports prioritized data transmission and reception, combining the blocking mode's advantage of not occupying CPU resources with the non-blocking and polling modes' advantage of never suspending the task.
The above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the invention, and such modifications and variations should also be regarded as falling within the protection scope of the invention.

Claims (10)

1. An efficient read-write strategy supporting data priority and an implementation method thereof, characterized in that the method comprises the following steps:
S101, the write function write() in a task calls the driver function dev_write() in the system; the data sent by the write function is a data frame data_frame;
S102, dev_write() first judges whether the hardware peripheral working-state register indicates busy; if not, steps S103-S104 are executed, and if so, steps S105-S107 are executed;
S103, dev_write() writes the data frame directly into the hardware device's send register to execute the send;
S104, dev_write() returns to the task, and the task's subsequent functions execute;
S105, dev_write() judges the priority field of the data frame;
S106, dev_write() writes the data frame into the data buffer of the corresponding priority;
S107, dev_write() returns to the task, and the task continues to execute;
S108, the hardware peripheral finishes sending, triggers the send-completion interrupt, and the interrupt service routine executes;
S109, the interrupt service routine checks the data buffers level by level for data, sending data from the high-priority buffer first; if no buffer holds data, step S110 is executed, otherwise steps S111-S112 are executed;
S110, the interrupt service routine returns to the task, which continues to execute;
S111, the interrupt service routine takes out the data, updates the buffer, writes the data into the hardware device's send register, and executes the send;
S112, the interrupt service routine returns to the task, which continues to execute its subsequent functions.
2. The efficient read-write strategy supporting data priority and implementation method of claim 1, characterized in that each data frame is provided with a priority field, and a separate data buffer is set up for each priority.
3. The efficient read-write strategy supporting data priority and implementation method of claim 2, characterized in that the buffer is a singly linked list, a circular linked list, or a queue.
4. The efficient read-write strategy supporting data priority and implementation method of claim 3, characterized in that the user applies for a segment of memory as the buffer and maintains it with a buffer data structure comprising the priority, the start address of the applied memory, the memory address of the head data frame, the memory address of the tail data frame, and the number of cached data frames, the buffer being used circularly.
5. The efficient read-write strategy supporting data priority and implementation method of claim 1, characterized in that the buffer is used circularly: the start address points to the memory segment, and when the head data frame reaches the maximum applied memory address, it wraps back to the applied start address so that the memory space is reused circularly.
6. An efficient read-write strategy supporting data priority and an implementation method thereof, characterized in that the method comprises the following steps:
S201, the read function read() in a task calls the driver function dev_read();
S202, dev_read() checks the buffers of each priority in turn for data; if none holds data, step S204 is executed, and if data exists, steps S203-S204 are executed;
S203, dev_read() reads the data frame and updates the buffer of the corresponding priority;
S204, dev_read() returns to the task, and the task's subsequent functions execute;
S205, the hardware peripheral receives data, triggers the receive interrupt, and the interrupt service routine is entered;
S206, the interrupt service routine judges the priority field of the received data frame;
S207, the interrupt service routine writes the received data frame into the buffer of the corresponding priority and updates the buffer;
S208, the interrupt service routine returns to the task, and the task's subsequent functions execute.
7. The efficient read-write strategy supporting data priority and implementation method of claim 6, characterized in that each data frame is provided with a priority field, and a separate data buffer is set up for each priority.
8. The efficient read-write strategy supporting data priority and implementation method of claim 7, characterized in that the buffer is a singly linked list, a circular linked list, or a queue.
9. The efficient read-write strategy supporting data priority and implementation method of claim 8, characterized in that the user applies for a segment of memory as the buffer and maintains it with a buffer data structure comprising the priority, the start address of the applied memory, the memory address of the head data frame, the memory address of the tail data frame, and the number of cached data frames, the buffer being used circularly.
10. The efficient read-write strategy supporting data priority and implementation method of claim 6, characterized in that the buffer is used circularly: the start address points to the memory segment, and when the head data frame reaches the maximum applied memory address, it wraps back to the applied start address so that the memory space is reused circularly.
CN202211180641.0A 2022-09-26 2022-09-26 High-efficiency read-write strategy supporting data priority and implementation method Pending CN115686774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211180641.0A CN115686774A (en) 2022-09-26 2022-09-26 High-efficiency read-write strategy supporting data priority and implementation method


Publications (1)

Publication Number Publication Date
CN115686774A true CN115686774A (en) 2023-02-03

Family

ID=85062759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211180641.0A Pending CN115686774A (en) 2022-09-26 2022-09-26 High-efficiency read-write strategy supporting data priority and implementation method

Country Status (1)

Country Link
CN (1) CN115686774A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination