CN117076346A - Application program data processing method and device and electronic equipment - Google Patents

Application program data processing method and device and electronic equipment

Info

Publication number
CN117076346A
CN117076346A
Authority
CN
China
Prior art keywords
memory
data
capacity
cache
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310914139.6A
Other languages
Chinese (zh)
Inventor
孙丞廉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Loongson Zhongke Chengdu Technology Co ltd
Original Assignee
Loongson Zhongke Chengdu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Loongson Zhongke Chengdu Technology Co ltd filed Critical Loongson Zhongke Chengdu Technology Co ltd
Priority to CN202310914139.6A
Publication of CN117076346A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 Cache access modes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources to service a request, the resource being the memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the application provides an application program data processing method, and relates to the technical field of electronic equipment. The application program data processing method comprises the following steps: acquiring a first capacity of a memory pool allocated for an application program during initialization, wherein the memory pool is at least used for storing received network data messages and packet receiving descriptors; when the first capacity is smaller than a capacity threshold, controlling at least part of the data in the cache memory to remain consistent with all the data in the memory pool; and when the first capacity is greater than or equal to the capacity threshold, acquiring a second capacity of a first memory block in the memory pool and, when the second capacity is smaller than the capacity threshold, controlling at least part of the data in the cache memory to remain consistent with all the data in the first memory block, wherein the first memory block is used for storing network data messages and/or packet receiving descriptors. The application reduces the number of accesses by the processor to the memory and reduces the performance overhead of the processor.

Description

Application program data processing method and device and electronic equipment
Technical Field
The present application relates to the field of electronic devices, and in particular, to a method and an apparatus for processing application data, and an electronic device.
Background
In recent years, with the popularization of network data packet transceiving technology, such as a data plane development kit (Data Plane Development Kit, DPDK), more and more network Input/Output (IO) applications are optimized by using the DPDK.
Currently, the process by which a DPDK-based application program receives network data messages is as follows: the network card stores a network data message into a pre-allocated memory pool, and updates the packet receiving descriptor corresponding to the network data message in the memory pool into a target packet receiving descriptor. The target packet receiving descriptor is used for indicating successful packet receiving and the storage address of the corresponding network data message. When the central processing unit (central processing unit, CPU), by polling the memory pool, determines that a target packet receiving descriptor exists, it reads the network data message from the memory pool based on the target packet receiving descriptor and provides it to the application program for processing through the DPDK interface. However, the CPU needs to access the memory multiple times in this process, so the performance overhead of the CPU in this process is large.
Disclosure of Invention
In view of the above problems, embodiments of the present application provide an application program data processing method, apparatus, and electronic device, so as to solve the problem that, in the process of receiving network data messages by a DPDK-based application program, the processor frequently accesses the memory and its performance overhead is therefore relatively high.
In order to solve the above problems, an embodiment of the present application discloses an application data processing method, which includes:
acquiring a first capacity of a memory pool allocated for the application program during initialization, wherein the memory pool is at least used for storing received network data messages and packet receiving descriptors;
when the first capacity is smaller than a capacity threshold value, controlling at least part of data in a cache memory to be consistent with all data in the memory pool all the time, so that a processor reads the data stored in the memory pool from the cache memory and then processes the data;
and under the condition that the first capacity is larger than or equal to the capacity threshold, acquiring a second capacity of a first memory block in the memory pool, and under the condition that the second capacity is smaller than the capacity threshold, controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time, so that the processor reads the data stored in the first memory block from the cache memory and then processes the data.
The embodiment of the application also discloses an application program data processing device, which comprises:
The acquisition module is used for acquiring a first capacity of a memory pool allocated for the application program during initialization, wherein the memory pool is at least used for storing received network data messages and packet receiving descriptors;
the control module is used for controlling at least partial data in the cache memory to be consistent with all data in the memory pool all the time under the condition that the first capacity is smaller than a capacity threshold value, so that the processor reads the data stored in the memory pool from the cache memory and then processes the data;
the control module is further configured to obtain a second capacity of the first memory block in the memory pool when the first capacity is greater than or equal to the capacity threshold, and control at least part of data in the cache memory to be consistent with all data in the first memory block all the time when the second capacity is less than the capacity threshold, so that the processor reads the data stored in the first memory block from the cache memory and then processes the data.
An embodiment of the application also discloses an electronic device comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors to perform the method of any of the preceding aspects.
Embodiments of the application also disclose a readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the processor to perform the method of any of the preceding aspects.
The embodiment of the application has the following advantages:
in the embodiment of the application, the first capacity of the memory pool allocated to the application program during initialization is acquired, so that, when the first capacity is smaller than the capacity threshold, at least part of the data in the cache memory is controlled to remain consistent with all the data in the memory pool, thereby synchronizing all the data in the memory pool into the cache memory. Alternatively, when the first capacity is greater than or equal to the capacity threshold and the second capacity of the first memory block in the memory pool is smaller than the capacity threshold, at least part of the data in the cache memory is controlled to remain consistent with all the data in the first memory block, so as to synchronize all the data in the first memory block into the cache memory. The memory pool is at least used for storing the network data messages and packet receiving descriptors received by the network card. The first memory block is used for storing network data messages and/or packet receiving descriptors. In this technical scheme, because the data in the cache memory is kept consistent with at least part of the data in the memory pool, in the process of receiving network data messages by the DPDK-based application program the processor can directly access the cache memory to read the packet receiving descriptors and/or network data messages, without accessing the memory to read them from the memory pool. Compared with the related art, this reduces the number of accesses by the processor to the memory and reduces the performance overhead of the processor.
Drawings
FIG. 1 is a flowchart of an application data processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of another method for processing application data according to an embodiment of the present application;
FIG. 3 is a flow chart of yet another application data processing method provided by an embodiment of the present application;
FIG. 4 is a flow chart of a method for processing application data according to another embodiment of the present application;
FIG. 5 is a flow chart of another application data processing method according to another embodiment of the present application;
FIG. 6 illustrates a block diagram of yet another application data processing apparatus provided by an embodiment of the present application;
fig. 7 shows a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application may be more readily understood, the application is described in further detail below with reference to the accompanying drawings and the detailed description.
Referring to fig. 1, a flowchart of an application data processing method according to an embodiment of the application is shown. Optionally, the data processing method is applied to a DPDK system of the electronic device and is executed by a processor of the DPDK system. The DPDK system of the electronic device further includes: a network card and a cache memory (cache). The processor is a reduced instruction set computer (Reduced Instruction Set Computer, RISC) processor supporting a lock cache. The DPDK-based application data processing method includes a configuration phase, by which the processor is caused to read all or part of the data stored in the memory pool from the cache memory. The application program data processing method comprises the following steps:
Configuration phase:
step 101, obtaining a first capacity of a memory pool allocated for an application program during initialization, where the memory pool is at least used for storing received network data messages and packet receiving descriptors.
When the DPDK system is initialized, a memory pool (mempool) is allocated for an application program from the memory, and memory blocks for storing various kinds of data are allocated from the memory pool. The memory pool is at least used for storing the network data messages sent to the application program, the packet receiving descriptors and the packet sending descriptors. The packet receiving descriptor comprises a packet receiving flag bit and an address flag bit. The packet receiving flag bit is used for indicating whether the packet receiving is successful. The address flag bit is used for carrying the storage address of the network data message corresponding to the packet receiving descriptor. The application program is an application that performs network IO based on the DPDK system.
In the embodiment of the present application, the network data packet is generally encapsulated by using an Mbuf (struct rte_mbuf) structure. Based on this, the memory pool can be considered to store an Mbuf (struct rte_mbuf) structure, a packet reception descriptor, and a packet transmission descriptor. In an alternative case, the memory pool includes a ring buffer (buffer ring) for storing the Mbuf structure for storing the network data packets. Thus, the starting address of the memory block for storing the network data packet according to the embodiment of the present application may be understood as the starting address of the ring buffer, and the capacity of the memory block may be understood as the capacity of the ring buffer.
Optionally, the process of obtaining, by the processor, the first capacity of the memory pool allocated for the application program at the time of initialization, that is, the size of the memory pool may include: the processor acquires the first capacity of the memory pool from the initialization configuration code corresponding to the DPDK system.
Step 102, judging whether the first capacity is smaller than a capacity threshold. If yes, go to step 103; if not, go to step 104.
In the embodiment of the application, after the first capacity of the memory pool is acquired, the first capacity and the capacity threshold value can be compared to determine whether the capacity of the cache memory can store data with the same size as the first capacity. Wherein the capacity threshold may be less than or equal to the capacity of the cache memory. By way of example, assume that the capacity of the cache is 16MB. The capacity threshold may be 15MB.
When the capacity threshold is smaller than the capacity of the cache memory, even if the cache memory is controlled to always store data with the capacity threshold, the cache memory can be kept with partial space to process other data, and the compatible processing capacity of the cache memory is ensured.
Step 103, controlling at least part of data in the cache memory to be consistent with all data in the memory pool all the time, so that the processor reads the data stored in the memory pool from the cache memory and processes the data.
In the embodiment of the application, when the first capacity is smaller than the capacity threshold, the cache memory can store data of the same size as the first capacity. The processor controls at least part of the data in the cache memory to remain consistent with all the data in the memory pool, namely, controls at least part of the cache blocks in the cache memory to always store the data in the memory pool. Thus, upon receiving a DMA read process for an address in the memory pool, the processor may hit the data for that address in the cache and read the data.
In the embodiment of the application, the DPDK system stores a plurality of packet receiving descriptors in the memory pool in advance during the initialization process. These packet receiving descriptors initially indicate unsuccessful packet reception. After receiving a network data message, the network card stores the network data message into the memory pool, and updates the packet receiving descriptor corresponding to the network data message in the memory pool into a target packet receiving descriptor. The target packet receiving descriptor is used for indicating successful packet receiving and the storage address of the corresponding network data message in the memory pool. The processor needs to poll the packet receiving descriptors in the memory pool to determine whether a target packet receiving descriptor exists among them. When determining that a target packet receiving descriptor exists, the processor obtains the storage address of the network data message from the packet receiving descriptor, so as to read the network data message based on that storage address and provide it to the application program for processing.
On the basis that at least part of the data in the cache memory remains consistent with all the data in the memory pool, when the descriptor polling time arrives, namely the DMA read processing time of the target address, the processor can hit the packet receiving descriptor of the target address in the cache memory and read the packet receiving descriptor. It then judges whether the packet receiving descriptor is a target packet receiving descriptor; when determining that a target packet receiving descriptor exists, it acquires the storage address of the network data message from the packet receiving descriptor, hits the network data message of that storage address in the cache memory, and reads the network data message. The target address is the address of the memory block in the memory pool used for storing the packet receiving descriptors.
Optionally, the cache blocks in the cache memory used for storing data in the memory pool have a mapping relationship with the corresponding memory blocks of the memory pool. The processor may convert the address of a memory block into the address of the corresponding cache block based on the mapping relationship, so as to hit the data of the memory block and read the data.
For example, when the first capacity is smaller than the capacity threshold, the processor controls a target cache block in the cache memory to store data of the memory pool, wherein the target cache block is a cache block in the cache memory, which is in a mapping relationship with a memory block corresponding to the memory pool. When the processor reaches the descriptor polling time, the target address is converted into the address of the corresponding cache block, and the packet receiving descriptor is read based on the address. And under the condition that the packet receiving descriptor is determined to be the target packet receiving descriptor, acquiring the storage address of the network data message in the packet receiving descriptor. And converting the storage address into the address of the corresponding cache block, and reading the network data message based on the address.
Further, optionally, the process by which the processor controls at least a portion of the data in the cache memory to remain consistent with all of the data in the memory pool may include: locking the data of the memory pool to the cache. Data locked to the cache is not replaced out of the cache.
In an embodiment of the present application, the processor is a RISC processor that supports a lock cache. It may lock the cache region of the cache memory such that the cache blocks within the locked cache region are not replaced out of the cache memory, such that the data of the cache blocks stored within the locked cache region are not replaced out of the cache memory.
Alternatively, the process of the processor locking the data of the memory pool to the cache may comprise: at least one set of lock window registers of the cache memory is configured based on the first capacity and a starting address of the memory pool such that the at least one set of lock window registers is in an active state and a locked cache area of the lock window registers in the active state includes at least a first cache block. The first cache block stores data of the memory pool, and the data stored by the first cache block is consistent with the data of the memory pool all the time.
As described above, when the DPDK system is initialized, a memory pool is allocated for the application program from the memory. Therefore, optionally, the starting address of the memory pool may be obtained from the initialization configuration code corresponding to the DPDK system. In an embodiment of the application, the cache memory may include at least one set of lock window registers. The lock window register is used for locking a cache area of the cache memory, which stores all data in a target memory block in a memory pool, based on a lock address of a lock window and a lock window mask when the lock window register is in an active state. Namely, the locked cache area of the lock window register is a cache area which always stores all data of one target memory block. The lock address of the lock window is the starting address of the target memory block. The lock window mask is the size of the target memory block. Based on this, it may also be considered that the lock window register is utilized to lock the segment of memory of the target memory block to the cache.
Optionally, the processor configures the lock window valid bit, the lock address of the lock window, and the lock window mask of at least one set of lock window registers of the cache based on the first capacity and the starting address of the memory pool such that the at least one set of lock window registers is in a valid state and the locked cache area of the lock window registers in the valid state includes at least the first cache block. The data stored by the first cache block is consistent with the data of the memory pool all the time.
For example, if the lock address of the lock window of the set of lock window registers is set to the starting address of the first memory block, the lock window mask is the size of the first memory block. The set of lock window registers is used to lock a cache area in the cache memory that stores all data in the first memory block when in an active state, such that the cache area is always used to store data in the first memory block, i.e., the data in the cache area always matches the data in the first memory block. For another example, if the lock address of the lock window of the set of lock window registers is set to the starting address of the second memory block, the lock window mask is the size of the second memory block. The set of lock window registers is used to lock a cache area in the cache memory that stores all data in the second memory block when in an active state, such that the cache area is always used to store data in the second memory block, i.e., the data in the cache area always matches the data in the second memory block.
For another example, if the lock address of the lock window of the set of lock window registers is set to the starting address 0x00000000 and the lock window mask is 128K, the set of lock window registers is used, when in an active state, to lock the cache region in the cache memory that stores all data within the target memory block of size 128K whose starting address in the memory pool is 0x00000000 (i.e., the target memory block occupies addresses 0x00000000 to 0x0001FFFF), such that the cache region is always used to store the data within the memory pool in the address range 0x00000000 to 0x0001FFFF.
However, the lock window mask of each set of lock window registers has a range of values. Thus, the number of lock window registers needs to be determined based on the first capacity of the memory pool allocated for the application at initialization. In one alternative implementation, the processor is to configure a first number of lock window registers based on a first capacity and a maximum value of the lock window mask. The lock window valid bit, lock address of the lock window, and lock window mask of the first number of sets of lock window registers are configured such that the first number of sets of lock window registers are all in a valid state. And the locked cache area of each set of lock window registers in the active state stores a portion of the data of the memory pool such that the locked cache areas of all lock window registers in the active state store all of the data of the memory pool. The memory block from which the data stored in each locked cache area comes is a partial memory block of the memory pool, and the memory block is consistent with the data in the partial memory block all the time. It is also contemplated that each set of lock window registers locks a portion of the memory pool to the cache memory such that all lock window registers in an active state lock that portion of the memory pool to the cache memory.
In one example, in the case where the maximum value of one lock window mask is greater than the first capacity, the first number is 1. The processor configures the lock window valid bit of one set of lock window registers to 1, the lock address of the lock window to be the starting address of the memory pool, and the lock window mask to be the value of the first capacity.
For another example, in the case where the maximum value of one lock window mask is smaller than the first capacity, assume that the first number is 2. The processor configures the lock window valid bit of the first set of lock window registers to 1, the lock address of its lock window to be the starting address of the memory pool, and its lock window mask to be a first value; and configures the lock window valid bit of the second set of lock window registers to 1, the lock address of its lock window to be the first address, and its lock window mask to be a second value. The first address is the starting address of the memory pool shifted by the first value. The sum of the first value and the second value is the value of the first capacity.
Step 104, obtaining the second capacity of the first memory block in the memory pool.
In the embodiment of the application, when the first capacity is greater than or equal to the capacity threshold, the cache memory cannot store data of the same size as the first capacity. The processor may instead control at least part of the data in the cache to remain consistent with part of the data in the memory pool. That part of the data may be network data messages and/or packet receiving descriptors.
In the embodiment of the application, when a memory pool is allocated for an application program in the initialization process of the DPDK system, memory blocks are allocated from the memory pool for various data in the DPDK transmission. Optionally, the processor may acquire the second capacity of the first memory block from an initialization configuration code corresponding to the DPDK system.
And step 105, controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time when the second capacity is smaller than the capacity threshold value, so that the processor reads the data stored in the first memory block from the cache memory and then processes the data. The first memory block is used for storing network data messages and/or packet receiving descriptors.
After the processor obtains the second capacity of the first memory block, the second capacity may be compared to a capacity threshold to determine whether the capacity of the cache memory may store data of a size equal to the second capacity. In the event that the second capacity is greater than or equal to the capacity threshold, it is indicated that the cache memory cannot store data of a size equal to the second capacity. The processor may not perform the step of controlling at least a portion of the data in the cache memory to remain consistent throughout all of the data in the first memory block. In the event that the second capacity is less than the capacity threshold, it is indicated that the cache memory may store data of a size equal to the second capacity. The processor controls at least a portion of the data in the cache memory to remain consistent with all of the data within the first memory block such that the processor is operable to read the data stored in the first memory block from the cache memory.
In the embodiment of the application, the processor controls at least part of the cache blocks in the cache memory to always store the data in the first memory block of the memory pool. Optionally, the cache blocks in the cache memory used for storing data in the first memory block have a mapping relationship with the first memory block.
In one example, assume that the first memory block is a memory block for storing network data messages. After polling the memory pool and determining that a target packet receiving descriptor exists, the processor acquires the storage address of the network data message from the packet receiving descriptor, converts the storage address into the address of the corresponding cache block, and reads the network data message based on that address.
As another example, assume that the first memory block is a memory block for storing packet receiving descriptors. When the descriptor polling time arrives, the processor converts the target address into the address of the corresponding cache block and reads the packet receiving descriptor based on that address. Because of the processor's polling mechanism for packet receiving descriptors in the DPDK system, the number and frequency of accesses by the processor to the memory block used for storing packet receiving descriptors are much greater than those to the memory block used for storing network data messages. Therefore, compared with using the first memory block to store network data messages, using it to store packet receiving descriptors can further reduce the number of accesses by the processor to the memory, thereby reducing the performance overhead of the processor in the process of receiving network data messages by the DPDK-based application program.
Optionally, the process of controlling, by the processor, that at least a portion of the data in the cache memory is consistent with all of the data in the first memory block at all times may include: the data of the first memory block is locked to the cache. Data locked to the first memory block of the cache is not replaced out of the cache.
In an embodiment of the present application, the process of the processor locking the data of the first memory block to the cache may include: configuring at least one set of lock window registers based on the second capacity and the start address of the first memory block, so that the at least one set of lock window registers is in a valid state, and the locked cache area controlled by the lock window registers in the valid state includes at least a second cache block, where the second cache block stores the data of the first memory block.
As described above, when the DPDK system is initialized, memory blocks for storing various data are allocated to the application program from the memory pool. Therefore, optionally, the start address of the first memory block may be obtained from an initialization configuration code corresponding to the DPDK system.
Similar to the foregoing, optionally, the processor may configure the lock window valid bit, the lock address of the lock window, and the lock window mask of the at least one set of lock window registers based on the second capacity and the starting address of the first memory block such that the at least one set of lock window registers is in an active state and the locked cache area controlled by the lock window registers in the active state includes at least the second cache block. The second cache block stores the data of the first memory block.
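As a rough illustration, configuring one set of lock window registers can be sketched in C. The structure layout, the field names, and the power-of-two mask derivation are assumptions made for clarity, not the actual register map of any particular lock-cache-capable RISC processor:

```c
#include <stdint.h>

/* Hypothetical lock-window register set; the field names and layout
 * are assumptions for illustration only. */
struct lock_window {
    uint64_t lock_addr;   /* lock address: start of the locked region */
    uint64_t lock_mask;   /* lock window mask covering the region */
    uint8_t  valid;       /* lock window valid bit: 1 = active */
};

/* Round a capacity up to the next power of two so that a single
 * address mask can describe the whole region. */
static uint64_t round_up_pow2(uint64_t n) {
    uint64_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* Configure one window to lock [base, base + capacity) into the cache. */
static void config_lock_window(struct lock_window *w,
                               uint64_t base, uint64_t capacity) {
    uint64_t span = round_up_pow2(capacity);
    w->lock_mask = ~(span - 1);          /* high bits select the region */
    w->lock_addr = base & w->lock_mask;  /* align the lock address */
    w->valid = 1;                        /* put the set in a valid state */
}
```

Rounding the capacity up to a power of two mirrors the fact that a single mask can only describe an aligned, power-of-two-sized region; a block of another size would need several windows.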
In an alternative implementation, the processor configures a second number of sets of lock window registers based on the second capacity and the maximum value of the lock window mask. The lock window valid bit, the lock address of the lock window, and the lock window mask of each of the second number of sets of lock window registers are configured such that all of these sets are in a valid state. The locked cache area of each set of lock window registers in the valid state stores a portion of the data of the first memory block, so that the locked cache areas of all lock window registers in the valid state together store all of the data of the first memory block. The memory block from which the data stored in each locked cache area comes is a partial memory block of the first memory block, and the locked cache area is always consistent with the data in that partial memory block. Equivalently, each set of lock window registers locks a portion of the first memory block to the cache memory, such that all lock window registers in the valid state together lock the whole first memory block to the cache memory.
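When the first memory block exceeds the span one window mask can cover, the scheme above divides it among a second number of windows. A minimal sketch of that division, with the maximum per-window span as a hypothetical parameter:

```c
#include <stdint.h>

/* Enumerate the (base, size) pieces that the second number of lock
 * windows would each lock, given a maximum span one window mask can
 * cover. Returns the number of windows used, or -1 if nwins sets of
 * registers are not enough. Values are illustrative. */
static int split_block(uint64_t base, uint64_t capacity, uint64_t max_span,
                       uint64_t bases[], uint64_t sizes[], int nwins) {
    int used = 0;
    for (uint64_t off = 0; off < capacity; used++) {
        if (used == nwins) return -1;
        uint64_t chunk = capacity - off < max_span ? capacity - off : max_span;
        bases[used] = base + off;   /* lock address of this window */
        sizes[used] = chunk;        /* portion of the first memory block */
        off += chunk;
    }
    return used;
}
```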
As previously described, by polling the packet reception descriptor, the processor may, in the presence of a packet reception descriptor indicating successful packet reception, read the network data message for application processing based on that packet reception descriptor. Optionally, the application data processing method further includes a packet-receiving stage, as shown in fig. 2, specifically including:
Packet-receiving stage:
step 201, when the descriptor polling time arrives, the target address is acquired. The target address is the address of the memory block used to store the packet reception descriptor.
The processor in the DPDK system needs to poll the packet reception descriptors in the memory pool to determine whether the network card has received a network data packet. In the initialization process, the DPDK system stores a plurality of packet reception descriptors in the memory pool in advance. Optionally, the target address is the address of each packet reception descriptor in the memory block used for storing packet reception descriptors. When the descriptor polling occasion arrives, the processor may obtain the addresses of the plurality of packet reception descriptors in the memory pool in order to obtain the plurality of packet reception descriptors.
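Since the packet reception descriptors are stored contiguously in their memory block, each descriptor's target address can be derived from the block's start address. A trivial sketch, where the descriptor size is a placeholder value:

```c
#include <stdint.h>

/* Target address of the i-th packet reception descriptor in the memory
 * block that starts at block_base; desc_size is assumed fixed. */
static uint64_t desc_addr(uint64_t block_base, uint64_t desc_size, uint32_t i) {
    return block_base + desc_size * (uint64_t)i;
}
```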
Step 202, detecting whether the cache memory stores data of the target address. If yes, go to step 203; if not, go to step 204.
In the embodiment of the application, the processor can search, based on the target address, whether the data of the target address exists in the cache memory. The processor traverses the cache memory: if the data of the target address is found in the cache memory, the cache memory stores the data of the target address; if not, the cache memory does not store the data of the target address.
In an alternative implementation, if the processor has locked the packet reception descriptors in the memory pool to the cache memory through lock window registers, the process of the processor detecting whether the cache stores data of the target address may include:
the processor may obtain the lock address determination condition for each set of lock window registers to sequentially determine whether the target address satisfies the lock address determination condition.
If the target address satisfies the lock address judgment condition of any set of lock window registers, the data of the target address is located in the locked cache area of the lock window register corresponding to that judgment condition; equivalently, the target address is locked by that lock window register. The processor then determines that the cache stores data of the target address. If the target address does not satisfy the lock address judgment condition of any set of lock window registers, the data of the target address is not located in any locked cache area, and the processor determines that the cache does not store data of the target address. The lock address judgment condition is used to reflect whether the data of an address is located in a locked cache area.
Optionally, the cache memory includes four sets of lock window registers. The lock address judgment condition of each set of lock window registers is that the target address or the storage address falls within the memory-pool address range of the cache area locked by a lock window register in the valid state.
Illustratively, the lock address judgment condition for each set of lock window registers is: slocki_valid & ((addr & slocki_mask) == (slocki_addr & slocki_mask)) is 1. Here, slocki_valid is the lock window valid bit of the i-th set of lock window registers, and addr is the target address or the storage address. slocki_mask is the lock window mask of the i-th set of lock window registers. slocki_addr is the lock address of the lock window of the i-th set of lock window registers. i is an integer with 0 ≤ i < 4.
(addr & slocki_mask) == (slocki_addr & slocki_mask) judges whether the result of a bitwise AND of the lock window mask of the i-th set of lock window registers with the target address (or the storage address) equals the result of a bitwise AND of the same mask with the lock address of that lock window, i.e., whether the target address or the storage address falls within the memory-pool address range of the cache area locked by the i-th set of lock window registers.
If the two results are equal, the target address or the storage address falls within that range, and (addr & slocki_mask) == (slocki_addr & slocki_mask) is 1.
If the two results are not equal, the target address or the storage address does not fall within that range, and (addr & slocki_mask) == (slocki_addr & slocki_mask) is 0. slocki_valid & ((addr & slocki_mask) == (slocki_addr & slocki_mask)) being 1 therefore indicates that the target address or the storage address falls within the memory-pool address range of the cache area locked by a lock window register in the valid state.
Illustratively, assume that the lock window valid bit slock0_valid of the first set of lock window registers is 1. If the processor determines that the target address addr satisfies slock0_valid & ((addr & slock0_mask) == (slock0_addr & slock0_mask)) is 1, the processor determines that the cache memory stores the data of the target address.
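The judgment condition above translates directly into C. The slocki_* field names follow the text; packing them into a struct is an assumption made for illustration:

```c
#include <stdint.h>

/* One set of lock window registers (field names follow the slocki_*
 * notation in the text; the layout itself is an assumption). */
struct slock {
    uint64_t addr;   /* slocki_addr: lock address of the window */
    uint64_t mask;   /* slocki_mask: lock window mask */
    int      valid;  /* slocki_valid: lock window valid bit */
};

/* Lock address judgment condition:
 * slocki_valid & ((addr & slocki_mask) == (slocki_addr & slocki_mask)) */
static int addr_is_locked(const struct slock *s, uint64_t addr) {
    return s->valid && ((addr & s->mask) == (s->addr & s->mask));
}

/* Check the target or storage address against all four sets in turn. */
static int cache_holds(const struct slock slocks[4], uint64_t addr) {
    for (int i = 0; i < 4; i++)
        if (addr_is_locked(&slocks[i], addr))
            return 1;   /* data is in a locked cache area */
    return 0;           /* read from the memory pool instead */
}
```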
Step 203, reading the packet receiving descriptor corresponding to the target address from the cache memory.
In the embodiment of the application, when it is detected that the cache memory stores the data of the target address, the processor can read the packet reception descriptor corresponding to the target address from the cache memory. In an alternative implementation, the processor may determine that the cache stores data of the target address if the target address satisfies the lock address judgment condition of any set of lock window registers.
Optionally, a cache block in the cache memory for storing data in the memory pool has a mapping relationship with the corresponding memory block in the memory pool. The processor may translate the target address into the address of the cache block based on the mapping relationship, so as to read the packet reception descriptor from the cache memory based on the address of the cache block.
And 204, reading the packet receiving descriptor corresponding to the target address from the memory pool.
In the embodiment of the application, when it is not detected that the cache memory stores the data of the target address, the processor can read the packet reception descriptor corresponding to the target address from the memory pool. In an alternative implementation, the processor may read the packet reception descriptor of the target address from the memory pool if the target address does not satisfy the lock address judgment condition of any set of lock window registers.
Step 205, obtaining a storage address of the network data message associated with the packet reception descriptor when the packet reception descriptor indicates successful packet reception. The storage address is an address for storing the network data message into the memory pool after receiving the network data message.
Optionally, the processor may determine whether the packet reception descriptor indicates successful packet reception by determining whether the packet reception identification bit in the packet reception descriptor indicates successful packet reception. And the processor can analyze the packet receiving descriptor to acquire the storage address of the network data message under the condition that the packet receiving descriptor indicates successful packet receiving. The storage address is an address where the network card stores the network data message into the memory pool after receiving the network data message.
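Step 205 can be illustrated with a toy descriptor layout. Real NIC packet reception descriptors are hardware-specific; the two fields below only mirror the packet-receiving identification bit and the storage address the text relies on:

```c
#include <stdint.h>

/* Hypothetical packet reception descriptor; real descriptors differ. */
struct rx_desc {
    uint64_t buf_addr;   /* storage address of the message in the pool */
    uint32_t status;     /* bit 0: packet-receiving identification bit */
};

#define RX_DONE 0x1u

/* Return the storage address if the descriptor indicates successful
 * packet reception, or 0 when no packet has been received yet. */
static uint64_t desc_storage_addr(const struct rx_desc *d) {
    return (d->status & RX_DONE) ? d->buf_addr : 0;
}
```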
Step 206, detecting whether the cache memory stores data of the memory address. If yes, go to step 207; if not, go to step 208.
In the embodiment of the application, the processor can search, based on the storage address, whether the data of the storage address exists in the cache memory. The processor traverses the cache memory: if the data of the storage address is found in the cache memory, the cache memory stores the data of the storage address; if not, the cache memory does not store the data of the storage address.
In an alternative implementation, if the processor has locked the packet reception descriptor in the memory pool to the cache via lock window registers, the process of the processor determining, based on the storage address, whether the cache memory stores data of the storage address may include: the processor may obtain the lock address judgment condition of each set of lock window registers and sequentially determine whether the storage address satisfies each condition.
If the storage address satisfies the lock address judgment condition of any set of lock window registers, the data of the storage address is located in the locked cache area of the lock window register corresponding to that judgment condition; equivalently, the storage address is locked by that lock window register. The processor then determines that the cache stores data of the storage address. If the storage address does not satisfy the lock address judgment condition of any set of lock window registers, the data of the storage address is not located in any locked cache area, and the processor determines that the cache memory does not store data of the storage address.
Step 207, the network data message corresponding to the storage address is read from the cache memory for processing by the application program.
In the embodiment of the application, when it is detected that the cache memory stores the data of the storage address, the processor can read the network data message corresponding to the storage address from the cache memory. In an alternative implementation, if the storage address satisfies the lock address judgment condition of any set of lock window registers, the processor may determine that the cache memory stores the data of the storage address and read the network data packet corresponding to the storage address from the cache memory.
Optionally, a cache block in the cache memory for storing data in the memory pool has a mapping relationship with the corresponding memory block in the memory pool. The processor may translate the storage address into the address of a cache block based on the mapping, so as to read the network data message from the cache memory based on the address of the cache block.
Step 208, reading the network data message corresponding to the storage address from the memory pool for processing by the application program.
In the embodiment of the application, when the cache memory does not store the data of the storage address, the processor can read the network data message corresponding to the storage address from the memory pool. In an alternative implementation, the processor may read the network data packet corresponding to the storage address from the memory pool if the storage address does not satisfy the lock address judgment condition of any set of lock window registers.
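The packet-receiving stage (steps 201 through 208) can be condensed into one decision routine. This is a sketch under the assumption that a single lock window covers the whole memory pool; it models only the cache-versus-memory decisions, not the actual reads:

```c
#include <stdint.h>

/* Toy descriptor: storage address plus a packet-received flag. */
struct desc { uint64_t store_addr; int rx_ok; };

/* Outcome of one descriptor poll: where the descriptor and the message
 * would be read from, and the message's storage address (0 if none). */
struct poll_result { int desc_from_cache; int msg_from_cache; uint64_t msg_addr; };

/* Lock address judgment condition for one window (wa/wm/v: lock
 * address, lock window mask, valid bit). */
static int in_window(uint64_t a, uint64_t wa, uint64_t wm, int v) {
    return v && ((a & wm) == (wa & wm));
}

static struct poll_result poll_once(uint64_t target_addr, const struct desc *d,
                                    uint64_t wa, uint64_t wm, int v) {
    struct poll_result r = {0, 0, 0};
    r.desc_from_cache = in_window(target_addr, wa, wm, v);   /* step 202 */
    if (d->rx_ok) {                                          /* step 205 */
        r.msg_addr = d->store_addr;
        r.msg_from_cache = in_window(r.msg_addr, wa, wm, v); /* step 206 */
    }
    return r;
}
```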
It should be noted that, in the case where the processor is a multi-core processor, the cache memory in the embodiment of the present application is a shared cache memory accessible to each processor core. For example, the processor is a quad-core processor. A 16MB split shared three-level cache memory is integrated in the chip, and consistency between the multi-core processor and the cache memory accessed by IO in Direct Memory Access (DMA) mode is maintained through a directory protocol.
In summary, in the application data processing method provided by the embodiment of the present application, the first capacity of the memory pool allocated to the application during initialization is obtained, so that at least part of the data in the cache memory and all the data in the memory pool are controlled to be consistent all the time under the condition that the first capacity is less than the capacity threshold, so that all the data in the memory pool are synchronized into the cache memory. Or when the first capacity is greater than or equal to the capacity threshold and the second capacity of the first memory block in the memory pool is smaller than the capacity threshold, controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time so as to synchronize all the data in the first memory block into the cache memory. The memory pool is at least used for storing network data messages and packet receiving descriptors received by the network card. The first memory block is used for storing network data messages and/or packet receiving descriptors. In the technical scheme, the data in the cache memory and at least part of the data in the memory pool are controlled to be consistent, so that in the process of receiving the network data message by the DPDK-based application program, the processor can directly access the cache memory to read the packet receiving descriptor and/or the network data message from the cache memory without accessing the memory to read the packet receiving descriptor and/or the network data message from the memory pool. Compared with the related art, the method reduces the access times of the processor to the memory and reduces the performance cost of the processor.
Referring to fig. 3, another method for processing application data according to an embodiment of the present application is shown. Optionally, the data processing method is applied to a DPDK system of an electronic device and is executed by a processor of the DPDK system. The DPDK system of the electronic device further includes: a network card and a cache memory (cache). The processor is a RISC processor supporting cache locking. The application program data processing method comprises the following steps:
configuration phase:
step 301, obtaining a first capacity of a memory pool allocated for an application program during initialization, where the memory pool is at least used for storing a received network data packet and a packet reception descriptor.
The explanation and implementation of this step may refer to the explanation and implementation of step 101, which is not repeated in the embodiments of the present application.
Step 302, determining whether the first capacity is less than a capacity threshold. If yes, go to step 303; if not, go to step 304.
The explanation and implementation of this step may refer to the explanation and implementation of step 102, which is not repeated in the embodiments of the present application.
Step 303, controlling at least part of the data in the cache memory to be consistent with all the data in the memory pool all the time, so that the processor reads the data stored in the memory pool from the cache memory and processes the data.
The explanation and implementation of this step may refer to the explanation and implementation of step 103, which is not repeated in the embodiments of the present application.
Step 304, obtaining a second capacity of the first memory block in the memory pool. The first memory block is used for storing a packet receiving descriptor.
The explanation and implementation of this step may refer to the explanation and implementation of step 104, which is not repeated in the embodiments of the present application.
Step 305, determining whether the second capacity is less than a capacity threshold. If yes, go to step 306; if not, go to step 309.
In an embodiment of the present application, the processor may compare the second capacity to a capacity threshold to determine whether the second capacity is less than the capacity threshold.
Step 306, controlling at least part of the data in the cache memory to be consistent with all the data in the first memory block all the time, so as to synchronize the packet reception descriptor into the cache memory.
The explanation and implementation of this step may refer to the explanation and implementation of step 105 described above, and this will not be repeated in the embodiments of the present application.
Step 307, obtaining a third capacity of a second memory block in the memory pool, where the second memory block is used to store the network data packet.
The explanation and implementation of this step may refer to the explanation and implementation of step 104, which is not repeated in the embodiments of the present application.
Step 308, controlling at least part of the data in the cache memory to be consistent with all the data in the second memory block all the time when the third capacity is smaller than the remaining capacity, so as to synchronize the network data message into the cache memory.
Wherein the remaining capacity is a difference between a total capacity of the cache memory and the second capacity.
Step 309, obtaining a third capacity of a second memory block in the memory pool, where the second memory block is used to store the network data packet.
In step 310, if the third capacity is smaller than the capacity threshold, at least part of the data in the cache memory and all the data in the second memory block are controlled to be consistent all the time, so as to synchronize the network data message into the cache memory.
In the embodiment of the application, the processor may compare the size of the third capacity with the size of the remaining capacity to determine whether the cache memory has a margin to store data having a size of the second memory block after storing data having a size of the first memory block. In the case that the third capacity is smaller than the remaining capacity, it indicates that the cache memory may store data having a size of the second memory block after storing data having a size of the first memory block. The processor controls at least a portion of the data in the cache to remain consistent with all of the data in the second memory block.
In the embodiment of the present application, when the first capacity is greater than or equal to the capacity threshold, the processor may first control at least part of the data in the cache memory to be consistent with all the data in the second memory block when the third capacity of the second memory block is less than the capacity threshold. Then, the processor judges whether the second capacity of the first memory block is smaller than the remaining capacity, and controls at least part of the data in the cache memory to be consistent with all the data in the first memory block all the time if the second capacity is smaller than the remaining capacity.
Specifically, step 304 may be replaced by acquiring the third capacity of the second memory block if the first capacity is greater than or equal to the capacity threshold. Accordingly, step 305 may be replaced by determining whether the third capacity is less than the capacity threshold. Step 306 may be replaced by controlling at least a portion of the data in the cache to remain consistent with all of the data in the second memory block at all times, so as to synchronize the network data message into the cache. Step 307 may be replaced by obtaining the second capacity of the first memory block in the memory pool. Step 308 may be replaced by controlling at least some of the data in the cache to be consistent with all of the data in the first memory block at all times, so as to synchronize the packet reception descriptor into the cache, if the second capacity is less than the remaining capacity.
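The branching among steps 302 through 310 can be summarized as a small planning function. The flag constants and the function itself are illustrative, not part of the described system:

```c
#include <stdint.h>

/* Which regions end up locked, as bit flags. */
#define LOCK_POOL  0x1  /* whole memory pool */
#define LOCK_DESC  0x2  /* first memory block: packet reception descriptors */
#define LOCK_RING  0x4  /* second memory block: network data messages */

static int plan_locks(uint64_t pool_cap, uint64_t desc_cap,
                      uint64_t ring_cap, uint64_t threshold,
                      uint64_t cache_total) {
    if (pool_cap < threshold)                    /* steps 302/303 */
        return LOCK_POOL;
    int flags = 0;
    if (desc_cap < threshold) {                  /* steps 305/306 */
        flags |= LOCK_DESC;
        if (ring_cap < cache_total - desc_cap)   /* steps 307/308 */
            flags |= LOCK_RING;
    } else if (ring_cap < threshold) {           /* steps 309/310 */
        flags |= LOCK_RING;
    }
    return flags;
}
```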
The application data processing method provided by the application is further schematically illustrated by the following examples.
As shown in fig. 4, assume that the capacity of the cache memory is 16MB. Considering that the cache needs to keep some space to process other data, the user configures the capacity threshold to be 15MB; that is, the capacity threshold may be less than or equal to the capacity of the cache memory. The application program data processing method comprises the following steps:
When the capacity threshold is smaller than the capacity of the cache memory, even if the cache memory is controlled to always store data up to the capacity threshold, part of the cache space remains available to process other data, ensuring that the cache memory can still handle other workloads.
Configuration phase:
step 401, initializing a processor DPDK system.
Step 402, the processor determines whether the capacity of the memory pool allocated for the application program at the time of initialization is less than 15MB. If yes, go to step 403. If not, go to step 404.
Step 403, setting the lock address of the lock window register as the start address add1 of the memory pool, the lock window mask as the size memsize1 of the memory pool, and the lock window valid bit as 1, i.e. valid, to lock the data of the memory pool to the cache.
Step 404, determining whether the capacity of the memory block of the packet reception descriptor allocated to the application program at the time of initialization is less than 15MB. If yes, go to step 405. If not, go to step 408.
The memory block of the packet receiving descriptor is a memory block used for storing the packet receiving descriptor in a memory pool.
Step 405, setting the lock address of the lock window register as the start address add2 of the packet receiving descriptor memory block, the lock window mask as the size memsize2 of the packet receiving descriptor memory block, and the lock window valid bit as 1, so as to lock the data of the packet receiving descriptor memory block to the cache memory.
Step 406, determining whether the size of the ring buffer is less than 1MB. If yes, go to step 407. If not, ending.
The ring buffer is a memory block in the memory pool for storing network data messages.
Step 407, setting the lock address of the lock window register as the start address add3 of the ring buffer, the lock window mask as the size memsize3 of the ring buffer, and the lock window valid bit as 1, so as to lock the data of the ring buffer to the cache memory.
Step 408, determining whether the capacity of the ring buffer is less than 15MB. If yes, go to step 409. If not, ending.
Step 409, setting the lock address of the lock window register as the start address add3 of the ring buffer, the lock window mask as the size memsize3 of the ring buffer, and the lock window valid bit as 1, so as to lock the data of the ring buffer to the cache memory.
As shown in fig. 5, the packet-receiving phase:
When the network card receives a network data message, it stores the network data message into the ring buffer, and the address of the network data message in the ring buffer is add5. The network card updates the packet reception descriptor corresponding to the network data message in the packet reception descriptor memory block into a target packet reception descriptor. The target packet reception descriptor is used to indicate successful packet reception and the storage address add5 of the corresponding network data message. The polling period set by the processor is 5 s. Assume further that the processor has locked the data of the memory pool to the cache.
In step 501, when the processor determines that 5 s have elapsed since the previous descriptor polling occasion, a new descriptor polling occasion arrives, and the processor obtains the target address addr4 of the memory block of the packet reception descriptor.
In step 502, the processor determines, based on the target address addr4, whether slock0_valid & ((addr4 & memsize1) == (add1 & memsize1)) is 1. If yes, go to step 503; if not, go to step 504.
In step 503, the processor reads the packet reception descriptor corresponding to addr4 from the cache memory.
Step 504, the processor reads the packet reception descriptor corresponding to addr4 from the packet reception descriptor memory block.
In step 505, the processor obtains the storage address add5 of the network data packet associated with the packet reception descriptor when the packet reception descriptor indicates successful packet reception.
In step 506, the processor determines, based on the storage address add5, whether slock0_valid & ((add5 & memsize1) == (add1 & memsize1)) is 1. If yes, go to step 507; if not, go to step 508.
In step 507, the processor reads the network data packet corresponding to add5 from the cache memory, so as to provide for processing by the application program.
Step 508, the processor reads the network data message corresponding to add5 from the ring buffer for processing by the application program.
In summary, in the application data processing method provided by the embodiment of the present application, the first capacity of the memory pool allocated to the application during initialization is obtained, so that at least part of the data in the cache memory and all the data in the memory pool are controlled to be consistent all the time under the condition that the first capacity is less than the capacity threshold, so that all the data in the memory pool are synchronized into the cache memory. Or when the first capacity is greater than or equal to the capacity threshold and the second capacity of the first memory block in the memory pool is smaller than the capacity threshold, controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time so as to synchronize all the data in the first memory block into the cache memory. The memory pool is at least used for storing network data messages and packet receiving descriptors received by the network card. The first memory block is used for storing network data messages and/or packet receiving descriptors. In the technical scheme, the data in the cache memory and at least part of the data in the memory pool are controlled to be consistent, so that in the process of receiving the network data message by the DPDK-based application program, the processor can directly access the cache memory to read the packet receiving descriptor and/or the network data message from the cache memory without accessing the memory to read the packet receiving descriptor and/or the network data message from the memory pool. Compared with the related art, the method reduces the access times of the processor to the memory and reduces the performance cost of the processor.
Further, because the processor polls the packet reception descriptor in the DPDK system, the number and frequency of accesses by the processor to the memory block used to store the packet reception descriptors is much greater than the number and frequency of accesses to the memory block used to store the network data messages. Therefore, compared with the second memory block used for storing the network data message, preferentially controlling the data of the cache memory to be consistent with the data of the first memory block used for storing the packet reception descriptor can reduce the number of memory accesses by the processor to a larger extent, thereby reducing the performance cost of the processor caused by accessing the memory.
Referring to fig. 6, a block diagram of an application data processing apparatus according to an embodiment of the present application is shown. As shown in fig. 6, the application data processing apparatus 600 includes: an acquisition module 601 and a control module 602.
An obtaining module 601, configured to obtain a first capacity of a memory pool allocated to the application during initialization, where the memory pool is at least used to store a received network data packet and a packet reception descriptor;
a control module 602, configured to control, when the first capacity is less than a capacity threshold, at least a portion of data in a cache memory to be consistent with all data in the memory pool all the time, so that a processor reads data stored in the memory pool from the cache memory and performs processing, where the capacity threshold is less than or equal to a capacity of the cache memory;
The control module 602 is further configured to obtain a second capacity of a first memory block in the memory pool when the first capacity is greater than or equal to the capacity threshold, and control at least part of data in the cache memory to be consistent with all data in the first memory block all the time when the second capacity is less than the capacity threshold, so that the processor reads the data stored in the first memory block from the cache memory and processes the data, where the first memory block is used for storing the network data packet and/or the packet reception descriptor.
Optionally, the first memory block is configured to store the packet reception descriptor, and the second memory block in the memory pool is configured to store the network data packet;
controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time under the condition that the second capacity is smaller than the capacity threshold value and the third capacity of the second memory block is larger than or equal to the residual capacity, so that the packet receiving descriptors are synchronized into the cache memory; wherein the remaining capacity is a difference between a total capacity of the cache memory and the second capacity;
And controlling at least part of data in the cache memory to be consistent with all data in the first memory block and the second memory block all the time under the condition that the second capacity is smaller than the capacity threshold value and the third capacity is smaller than the residual capacity so as to synchronize the network data message and the packet receiving descriptor into the cache memory.
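The refinement above — also locking the second memory block when the packet data fits in the cache capacity left after the descriptors — can be encoded as a small decision function. This is a sketch under assumed names (`choose_scope`, `lock_scope`); the patent does not prescribe this interface.

```c
#include <stddef.h>

/* Possible outcomes of the locking decision. */
enum lock_scope { LOCK_NONE, LOCK_DESC_ONLY, LOCK_DESC_AND_PKT };

/* After the descriptor block (second capacity) fits under the
 * threshold, also lock the packet block (third capacity) when it fits
 * in the cache space left over. */
static enum lock_scope choose_scope(size_t cache_total, size_t cap_threshold,
                                    size_t desc_cap /* second capacity */,
                                    size_t pkt_cap  /* third capacity  */)
{
    if (desc_cap >= cap_threshold)
        return LOCK_NONE;                /* descriptors alone do not fit */
    size_t remaining = cache_total - desc_cap; /* cache left after descriptors */
    if (pkt_cap >= remaining)
        return LOCK_DESC_ONLY;           /* packets too big: descriptors only */
    return LOCK_DESC_AND_PKT;            /* both fit: descriptors + packets   */
}
```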
Optionally, the acquisition module 601 is further configured to acquire a target address when a descriptor polling opportunity is reached; the target address is the address of a memory block used for storing the packet reception descriptor;
the application data processing apparatus 600 further includes: a reading module, configured to read, when it is detected that the cache memory stores data of the target address, the packet reception descriptor corresponding to the target address from the cache memory;
the acquisition module 601 is further configured to obtain a storage address of the network data packet associated with the packet reception descriptor when the packet reception descriptor indicates that the packet reception is successful; the storage address is the address at which the network data message is stored into the memory pool after it is received;
The reading module is further configured to read, when it is detected that the cache memory stores data of the storage address, the network data message corresponding to the storage address from the cache memory for processing by the application.
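The polling flow above (check the descriptor, and on a successful reception read the associated message) resembles a typical poll-mode receive loop. The sketch below is hypothetical: the descriptor layout (`dd` done flag, `pkt_addr`) follows common NIC conventions and is not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed RX descriptor layout, in the style of common NIC rings. */
struct rx_desc {
    volatile uint32_t dd;       /* descriptor-done flag set by the NIC */
    uint32_t          len;      /* received packet length              */
    void             *pkt_addr; /* where DMA placed the packet         */
};

typedef void (*pkt_handler)(const void *pkt, uint32_t len);

/* One polling pass over a descriptor ring; returns packets handled.
 * When the descriptor block is cache-locked, each ring[i].dd read is
 * served from the cache rather than from main memory. */
static int poll_rx_ring(struct rx_desc *ring, size_t n, pkt_handler handle)
{
    int handled = 0;
    for (size_t i = 0; i < n; i++) {
        if (!ring[i].dd)
            continue;                           /* not received yet    */
        handle(ring[i].pkt_addr, ring[i].len);  /* read packet data    */
        ring[i].dd = 0;                         /* return descriptor   */
        handled++;
    }
    return handled;
}

/* Trivial sample handler used in the usage example below. */
static int g_pkts_handled;
static void count_pkt(const void *pkt, uint32_t len)
{
    (void)pkt; (void)len;
    g_pkts_handled++;
}
```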
Optionally, the control module 602 is further configured to lock data of the memory pool to the cache; and is also configured to lock data of the first memory block to the cache memory.
Optionally, the control module 602 is further configured to configure at least one set of lock window registers of the cache memory based on the first capacity and the start address of the memory pool, so that the at least one set of lock window registers is in an active state, and the locked cache area controlled by the lock window registers in the active state includes at least a first cache block, where the first cache block stores the data of the memory pool; and is further configured to configure the at least one set of lock window registers based on the second capacity and the start address of the first memory block, so that the at least one set of lock window registers is in an active state, and the locked cache area controlled by the lock window registers in the active state includes at least a second cache block, where the second cache block stores the data of the first memory block.
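A lock window of the kind described above can be modeled as a base/mask register pair plus a valid bit. The layout and the rounding-to-power-of-two policy below are assumptions for illustration; the actual register format is hardware-specific.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed lock window register set: a base/mask pair plus a valid
 * bit. This is a model, not an actual hardware register layout. */
struct lock_window {
    uint64_t base;   /* start address of the locked region, aligned */
    uint64_t mask;   /* address bits that must match `base`         */
    int      valid;  /* window is in the active state               */
};

/* Round the region [start, start+len) up to a power-of-two window and
 * program one register set so every address inside is cache-locked. */
static void config_lock_window(struct lock_window *w,
                               uint64_t start, size_t len)
{
    uint64_t size = 1;
    while (size < len)
        size <<= 1;                 /* power-of-two window size   */
    w->mask  = ~(size - 1);         /* compare the high bits only */
    w->base  = start & w->mask;     /* align base to the window   */
    w->valid = 1;                   /* mark window active         */
}
```

Configuring the window with the memory pool's start address and the first capacity (or the first memory block's start address and the second capacity) then corresponds to the two configuration steps described above.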
Optionally, the reading module is further configured to determine that the cache memory stores data of the target address when the target address meets the lock address judgment condition of any set of lock window registers, where the lock address judgment condition reflects whether the data of an address is located in the locked cache area; and is further configured to determine that the cache memory stores data of the storage address when the storage address meets the lock address judgment condition.
Optionally, the cache memory includes four sets of lock window registers, and the lock address judgment condition of each set of lock window registers is: the target address or the storage address is located within the memory pool addresses of the data in the cache area corresponding to a lock window register in the active state.
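Assuming a base/mask model for the lock window registers (an illustration, not the actual hardware format), the lock address judgment condition for four register sets reduces to a masked comparison:

```c
#include <stdint.h>

/* Assumed lock window model: base/mask pair plus a valid bit. */
struct lock_window { uint64_t base, mask; int valid; };

#define NUM_LOCK_WINDOWS 4

/* An address "hits" when any active window matches its high bits;
 * a hit means the data for that address is in the locked cache area. */
static int addr_is_locked(const struct lock_window w[NUM_LOCK_WINDOWS],
                          uint64_t addr)
{
    for (int i = 0; i < NUM_LOCK_WINDOWS; i++) {
        if (w[i].valid && ((addr & w[i].mask) == w[i].base))
            return 1;   /* serve this access from the cache    */
    }
    return 0;           /* fall back to a normal memory access */
}
```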
Optionally, the processor is a multi-core processor, the cache memory is a shared cache memory accessible to each processor core, and the memory pool includes a ring buffer for storing network data messages.
In summary, in the application data processing apparatus provided by the embodiments of the present application, the first capacity of the memory pool allocated to the application during initialization is acquired. When the first capacity is less than a capacity threshold, at least part of the data in the cache memory is kept consistent with all of the data in the memory pool at all times, so that all of the data in the memory pool is synchronized into the cache memory. Alternatively, when the first capacity is greater than or equal to the capacity threshold and the second capacity of a first memory block in the memory pool is less than the capacity threshold, at least part of the data in the cache memory is kept consistent with all of the data in the first memory block at all times, so that all of the data in the first memory block is synchronized into the cache memory. The memory pool is used at least for storing the network data messages and packet reception descriptors received by the network card, and the first memory block is used for storing the network data messages and/or the packet reception descriptors. By keeping the data in the cache memory consistent with at least part of the data in the memory pool, the processor of a DPDK-based application can, while receiving network data messages, read the packet reception descriptors and/or the network data messages directly from the cache memory, without accessing main memory to read them from the memory pool. Compared with the related art, this reduces the number of memory accesses made by the processor and lowers the processor's performance overhead.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment. For example, the electronic device 700 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 7, an electronic device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the electronic device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 can include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the electronic device 700. Examples of such data include instructions for any application or method operating on the electronic device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 706 provides power to the various components of the electronic device 700. Power supply components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 700.
The multimedia component 708 includes a screen that provides an output interface between the electronic device 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. When the electronic device 700 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the electronic device 700. For example, the sensor assembly 714 may detect an on/off state of the electronic device 700, a relative positioning of the components, such as a display and keypad of the electronic device 700, a change in position of the electronic device 700 or a component of the electronic device 700, the presence or absence of a user's contact with the electronic device 700, an orientation or acceleration/deceleration of the electronic device 700, and a change in temperature of the electronic device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the electronic device 700 and other devices. The electronic device 700 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 716 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 700 can be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements.
An embodiment of the present application provides a readable storage medium whose instructions, when executed by a processor of a terminal, enable the terminal to execute the application data processing method of the foregoing embodiments.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
The foregoing has described in detail an application data processing method and apparatus, an electronic device, and a storage medium according to the present application. Specific examples have been used herein to illustrate the principles and embodiments of the present application; the above examples are provided only to assist in understanding the method of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present application. In view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (11)

1. A method of application data processing, the method comprising:
acquiring a first capacity of a memory pool allocated for the application program during initialization, wherein the memory pool is at least used for storing received network data messages and packet receiving descriptors;
when the first capacity is smaller than a capacity threshold value, controlling at least part of data in a cache memory to be consistent with all data in the memory pool all the time, so that a processor reads the data stored in the memory pool from the cache memory and then processes the data;
And under the condition that the first capacity is larger than or equal to the capacity threshold, acquiring a second capacity of a first memory block in the memory pool, and under the condition that the second capacity is smaller than the capacity threshold, controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time, so that the processor reads the data stored in the first memory block from the cache memory and then processes the data.
2. The method of claim 1, wherein the first memory block is configured to store the packet reception descriptor, and a second memory block in the memory pool is configured to store the network data message;
controlling at least part of data in the cache memory to be consistent with all data in the first memory block all the time under the condition that the second capacity is smaller than the capacity threshold value and the third capacity of the second memory block is larger than or equal to the residual capacity, so that the packet receiving descriptors are synchronized into the cache memory; wherein the remaining capacity is a difference between a total capacity of the cache memory and the second capacity;
And controlling at least part of data in the cache memory to be consistent with all data in the first memory block and the second memory block all the time under the condition that the second capacity is smaller than the capacity threshold value and the third capacity is smaller than the residual capacity so as to synchronize the network data message and the packet receiving descriptor into the cache memory.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
under the condition that the descriptor polling time is reached, acquiring a target address; the target address is an address of a memory block for storing the packet reception descriptor;
reading the packet receiving descriptor corresponding to the target address from the cache memory under the condition that the data of the target address stored in the cache memory is detected;
acquiring a storage address of the network data message associated with the packet receiving descriptor under the condition that the packet receiving descriptor indicates successful packet receiving; the storage address is an address for storing the network data message into the memory pool after receiving the network data message;
And under the condition that the data of the storage address stored in the cache memory is detected, reading the network data message corresponding to the storage address from the cache memory for processing by the application program.
4. The method of claim 1, wherein the controlling at least part of the data in the cache memory to be consistent with all of the data in the memory pool at all times comprises:
locking data of the memory pool to the cache;
the controlling at least part of the data in the cache memory to always keep consistent with all the data in the first memory block includes:
locking the data of the first memory block to the cache memory.
5. The method of claim 4, wherein locking the data of the memory pool to the cache memory comprises:
configuring at least one set of lock window registers of the cache memory based on the first capacity and a starting address of the memory pool, such that the at least one set of lock window registers are in an active state, and a locked cache area of the lock window registers in the active state includes at least a first cache block; the first cache block stores data of the memory pool;
The locking the data of the first memory block to the cache memory includes:
configuring the at least one set of lock window registers based on the second capacity and the start address of the first memory block, so that the at least one set of lock window registers are in an active state, and the locked cache area controlled by the lock window registers in the active state at least comprises a second cache block; the second cache block stores the data of the first memory block.
6. The method of claim 5, wherein the detecting that the cache stores data for the target address comprises:
determining that the cache memory stores data of the target address under the condition that the target address meets the lock address judging condition of any group of lock window registers; the lock address judging condition is used for reflecting whether the data of the address is positioned in the locked cache area;
the detecting that the cache memory stores the data of the storage address includes:
and determining that the cache memory stores data of the storage address under the condition that the storage address meets the lock address judging condition.
7. The method of claim 6, wherein the cache memory includes four sets of lock window registers, and the lock address judgment condition of each set of lock window registers is: the target address or the storage address is located within the memory pool addresses of the data in the cache area corresponding to a lock window register in the active state.
8. The method of claim 1, wherein the processor is a multi-core processor, the cache memory is a shared cache memory accessible to each processor core, and the memory pool comprises a ring buffer for storing the network data messages.
9. An application data processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring a first capacity of a memory pool allocated for the application program during initialization, wherein the memory pool is at least used for storing received network data messages and packet receiving descriptors;
the control module is used for controlling at least partial data in the cache memory to be consistent with all data in the memory pool all the time under the condition that the first capacity is smaller than a capacity threshold value, so that the processor reads the data stored in the memory pool from the cache memory and then processes the data;
The control module is further configured to obtain a second capacity of the first memory block in the memory pool when the first capacity is greater than or equal to the capacity threshold, and control at least part of data in the cache memory to be consistent with all data in the first memory block all the time when the second capacity is less than the capacity threshold, so that the processor reads the data stored in the first memory block from the cache memory and then processes the data.
10. An electronic device comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors to perform the method of any one of claims 1 to 8.
11. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the processor to perform the method of any one of claims 1 to 8.
CN202310914139.6A 2023-07-24 2023-07-24 Application program data processing method and device and electronic equipment Pending CN117076346A (en)

Publications (1)

Publication Number Publication Date
CN117076346A true CN117076346A (en) 2023-11-17




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination