CN104965798A - Data processing method, related device and data processing system - Google Patents


Info

Publication number
CN104965798A
Authority
CN
China
Prior art keywords
target data
data
configuration information
start address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510315790.7A
Other languages
Chinese (zh)
Other versions
CN104965798B (en)
Inventor
袁张慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Shanghai Huawei Technologies Co Ltd
Original Assignee
Shanghai Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huawei Technologies Co Ltd filed Critical Shanghai Huawei Technologies Co Ltd
Priority to CN201510315790.7A priority Critical patent/CN104965798B/en
Publication of CN104965798A publication Critical patent/CN104965798A/en
Application granted granted Critical
Publication of CN104965798B publication Critical patent/CN104965798B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Memory System Of A Hierarchy Structure (AREA)
  • Bus Control (AREA)

Abstract

Embodiments of the present invention disclose a data processing method, a related device and a data processing system. The data processing method comprises: determining a target length of target data to be processed in a memory; determining a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data it can store is greater than or equal to the target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU; and sending first configuration information to a direct memory access (DMA) controller, where the first configuration information triggers the DMA controller to transfer the target data to the target cache region. In this way data in the memory can be processed efficiently, the difficulty of scheduling the memory during data processing is greatly reduced, and the memory-scheduling process is simplified.

Description

Data processing method, related device and system
Technical field
The present invention relates to the communications field, and in particular to a data processing method, a related device and a system.
Background art
A computer system of the prior art is shown in Fig. 1. A central processing unit (CPU) reads and writes data stored in memory, and the CPU is connected to the memory by an internal bus. Specifically, the memory of the prior art is divided into multiple levels: the higher the level, the larger its capacity but the lower the CPU's access efficiency, and the higher the level, the longer its path to the CPU. To improve data-processing efficiency, the CPU therefore time-division-multiplexes the lowest level, whose access efficiency is the highest.
The memory in Fig. 1 is divided into three levels. The memory of level 1 is the smallest but the CPU accesses it most efficiently, while the memory of level 3 is the largest but the CPU accesses it least efficiently. For example, for data located in memory level 3 that needs frequent processing, memory level 1 is divided into multiple time periods: the CPU uses memory level 1 as the memory for that data only within a fixed period T, and in the remaining periods outside T memory level 1 must be used as the memory for other data.
Time-division-multiplexing the memory levels in this way, as in the prior art, makes memory scheduling difficult, couples the different uses of the memory, reduces data-processing efficiency and increases the burden on the CPU.
Summary of the invention
Embodiments of the present invention provide a data processing method, a related device and a system that can effectively reduce the complexity of memory scheduling.
A first aspect of the embodiments of the present invention provides a data processing method, comprising:
determining a target length of target data to be processed in a memory;
determining a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU; and
sending first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache region.
With reference to the first aspect of the embodiments of the present invention, in a first implementation of the first aspect, before the first configuration information is sent to the DMA controller, the method further comprises:
determining a source start address of the target data;
determining a destination start address of the target cache region; and
generating the first configuration information, which includes the target length, the source start address and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address and to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
With reference to the first implementation of the first aspect, in a second implementation of the first aspect, after the first configuration information is sent to the DMA controller, the method further comprises:
processing the target data stored in the target cache region;
determining whether the processed target data has been modified;
if it has, sending second configuration information to the DMA controller, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address and to write the modified target data to the source start address; and
if it has not, releasing the unmodified target data.
A second aspect of the embodiments of the present invention provides a data processing method, comprising:
receiving first configuration information; and
transferring, according to the first configuration information, target data to be processed in a memory to a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to a target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU.
With reference to the second aspect of the embodiments of the present invention, in a first implementation of the second aspect, after the first configuration information is received, the method further comprises:
reading the first configuration information to obtain the target length, a source start address of the target data and a destination start address of the target cache region;
and the transferring, according to the first configuration information, the target data to be processed in the memory to the target cache region in the memory comprises:
reading the target data from the source start address; and
writing the read target data to the destination start address, so that the target data is transferred to the target cache region.
With reference to the first implementation of the second aspect, in a second implementation of the second aspect, after the target data to be processed in the memory is transferred to the target cache region in the memory according to the first configuration information, the method further comprises:
receiving second configuration information;
determining, according to the second configuration information, that the target data stored in the target cache region has been modified;
reading the modified target data from the destination start address; and
writing the modified target data to the source start address.
A third aspect of the embodiments of the present invention provides a central processing unit (CPU), comprising:
a first determining unit, configured to determine a target length of target data to be processed in a memory;
a second determining unit, configured to determine a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU; and
a first sending unit, configured to send first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache region.
With reference to the third aspect of the embodiments of the present invention, in a first implementation of the third aspect, the CPU further comprises:
a third determining unit, configured to determine a source start address of the target data;
a fourth determining unit, configured to determine a destination start address of the target cache region; and
a generating unit, configured to generate the first configuration information, which includes the target length, the source start address and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address and to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
With reference to the first implementation of the third aspect, in a second implementation of the third aspect, the CPU further comprises:
a processing unit, configured to process the target data stored in the target cache region;
a fifth determining unit, configured to determine whether the processed target data has been modified;
a second sending unit, configured to send, if the target data has been modified, second configuration information to the DMA controller, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address and to write the modified target data to the source start address; and
a sixth determining unit, configured to release the unmodified target data if the target data has not been modified.
A fourth aspect of the embodiments of the present invention provides a direct memory access (DMA) controller, comprising:
a first receiving unit, configured to receive first configuration information; and
a transfer unit, configured to transfer, according to the first configuration information, target data to be processed in a memory to a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to a target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU.
With reference to the fourth aspect of the embodiments of the present invention, in a first implementation of the fourth aspect, the DMA controller further comprises:
a first reading unit, configured to read the first configuration information to obtain the target length, a source start address of the target data and a destination start address of the target cache region;
and the transfer unit comprises:
a reading module, configured to read the target data from the source start address; and
a writing module, configured to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
With reference to the first implementation of the fourth aspect, in a second implementation of the fourth aspect, the DMA controller further comprises:
a second receiving unit, configured to receive second configuration information;
a seventh determining unit, configured to determine, according to the second configuration information, that the target data stored in the target cache region has been modified;
a reading unit, configured to read the modified target data from the destination start address; and
a writing unit, configured to write the modified target data to the source start address.
A fifth aspect of the embodiments of the present invention provides a computer system, comprising the CPU according to any one of the third aspect to the second implementation of the third aspect, the DMA controller according to any one of the fourth aspect to the second implementation of the fourth aspect, and a memory;
wherein the memory is connected to the CPU and to the DMA controller by an internal bus.
The embodiments of the present invention disclose a data processing method, a related device and a system. The data processing method comprises: determining a target length of target data to be processed in a memory; determining a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU; and sending first configuration information to a DMA controller, where the first configuration information triggers the DMA controller to transfer the target data to the target cache region. Data in the memory can thus be processed efficiently; the difficulty of scheduling the memory during data processing is greatly reduced, the memory-scheduling process is simplified, the design of the CPU during data processing is simplified, and the occupancy of the memory is reduced.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of a computer system provided by the prior art;
Fig. 2 is a schematic structural diagram of a computer system in an embodiment of the present invention;
Fig. 3 is a flowchart of the steps of a data processing method provided by the present invention;
Fig. 4 is another flowchart of the steps of a data processing method provided by the present invention;
Fig. 5 is another flowchart of the steps of a data processing method provided by the present invention;
Fig. 6 is another flowchart of the steps of a data processing method provided by the present invention;
Fig. 7 is a schematic structural block diagram of a CPU provided by the present invention;
Fig. 8 is another schematic structural block diagram of a CPU provided by the present invention;
Fig. 9 is a schematic structural block diagram of a DMA controller provided by the present invention;
Fig. 10 is another schematic structural block diagram of a DMA controller provided by the present invention;
Fig. 11 is a schematic structural block diagram of a computer system provided by the present invention.
Detailed description of the embodiments
To better understand the data processing method shown in the embodiments of the present invention, the structure of a computer system capable of implementing the data processing method of this embodiment is first described in detail.
As shown in Fig. 2, the computer system of this embodiment comprises a central processing unit (CPU) 201, a memory 202 and a direct memory access (DMA) controller 203. The CPU 201 is connected to the memory 202 and to the DMA controller 203 by an internal bus. Under the control of the CPU 201, the DMA controller 203 can transfer data to and from the memory 202; that is, the DMA controller 203 can write data to and read data from the memory 202 without requiring any computation by the CPU 201, which speeds up processing in the computer system and effectively improves the efficiency of data transfer.
The memory 202 is divided into multiple levels: the higher the level, the larger its capacity but the lower the access efficiency of the CPU 201, and the higher the level, the longer its path to the CPU.
The data processing method of this embodiment is shown in Fig. 3.
301. Determine the target length of target data to be processed in the memory.
In this embodiment, the CPU determines the target length of the target data stored in the memory that needs to be processed.
In this embodiment, the target data may be data that the CPU determines needs to be processed frequently and whose current memory level cannot guarantee efficient processing of the target data.
302. Determine a target cache region in the memory.
In this embodiment, the memory level in which the target cache region determined by the CPU is located is lower than the memory level in which the target data is located; that is, the path between the target cache region and the CPU is shorter than the path between the target data and the CPU.
The target cache region is currently in an idle state with no data written to it.
Moreover, the length of data the target cache region can store is greater than or equal to the target length of the target data.
That is, in this embodiment the CPU opens up one or more idle buffer segments in a memory level with high CPU access efficiency and designates as the target cache region a buffer whose storable data length is greater than or equal to the target length of the target data, so that the data can be processed efficiently in the target cache region; a minimal selection sketch is given below.
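The following C sketch shows one way such a selection could be implemented in software. It is an illustration only: the region descriptor, its field names and the first-fit policy are assumptions that do not come from the patent.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for one candidate buffer segment in a fast memory level. */
struct cache_region {
    uintptr_t start;   /* destination start address inside the fast memory level */
    size_t    size;    /* length of data this region can store                    */
    bool      in_use;  /* false = idle, no data currently written                 */
};

/* Pick the first idle region whose storable length is >= target_len.
 * Returns NULL when no suitable target cache region exists. */
static struct cache_region *
select_target_cache_region(struct cache_region *regions, size_t count, size_t target_len)
{
    for (size_t i = 0; i < count; i++) {
        if (!regions[i].in_use && regions[i].size >= target_len) {
            regions[i].in_use = true;   /* reserve the region for the target data */
            return &regions[i];
        }
    }
    return NULL;
}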
As shown in Fig. 2, the memory of this embodiment is divided into three memory levels (memory level 1, memory level 2 and memory level 3). It should be made clear that this embodiment does not limit the number of memory levels; three levels are used here only for illustration.
It should also be made clear that this embodiment does not limit the specific memory level in which the target cache region is located or the memory level in which the target data is located, as long as the memory level of the target cache region is lower than the memory level of the target data; for example, the target cache region may be located in memory level 1 shown in Fig. 2 while the target data is located in memory level 2 or 3.
303. Send first configuration information to the direct memory access (DMA) controller.
The CPU generates the first configuration information, which is used to trigger the DMA controller to transfer the target data to the target cache region.
The CPU sends the first configuration information to the DMA controller, so that the DMA controller can transfer the target data to the target cache region according to the first configuration information.
This embodiment does not limit the specific information contained in the first configuration information, as long as the DMA controller can transfer the target data to the target cache region according to the first configuration information; one possible representation is sketched below.
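As an illustration only, the first configuration information could be represented as a simple descriptor holding the source start address, the destination start address and the target length. The C sketch below shows such a layout and a register-style hand-off to the DMA controller; the struct, its field names and dma_send_config are assumptions, not the patent's definition.

#include <stddef.h>
#include <stdint.h>

/* One possible layout of the "first configuration information". */
struct dma_config {
    uintptr_t src_start;   /* source start address of the target data              */
    uintptr_t dst_start;   /* destination start address of the target cache region */
    size_t    length;      /* target length of the target data, in bytes           */
};

/* Hypothetical hand-off: in a real system this would program the DMA
 * controller's configuration registers over the internal bus. */
void dma_send_config(volatile struct dma_config *dma_regs, const struct dma_config *cfg)
{
    dma_regs->src_start = cfg->src_start;
    dma_regs->dst_start = cfg->dst_start;
    dma_regs->length    = cfg->length;   /* writing the length could also start the transfer */
}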
The data processing method of this embodiment can be applied in a baseband processor. It enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
Of course, the application of the data processing method of this embodiment in a baseband processor is only an example and is not limiting; the method can also be applied in various communication networks, for example the BTS of a GSM network, the NodeB of a UMTS network and the eNodeB of an LTE network.
The data processing method of this embodiment is described in further detail below with reference to a specific application scenario.
As shown in Fig. 4, the data processing method comprises:
401. Determine the target length of target data to be processed in the memory.
For details see step 301 shown in Fig. 3; they are not repeated in this embodiment.
In this embodiment, the target data is taken to be located in memory level 3 shown in Fig. 2 as an illustration.
402. Determine the source start address of the target data.
The CPU determines the source start address of the target data.
403. Determine the target cache region in the memory.
For details see step 302 shown in Fig. 3; they are not repeated in this embodiment.
404. Determine the destination start address of the target cache region.
In this embodiment, the target cache region is taken to be located in memory level 1 shown in Fig. 2 as an illustration.
405. Generate the first configuration information, which includes the target length, the source start address and the destination start address.
The CPU generates the first configuration information from the determined target length, source start address and destination start address.
406. Send the first configuration information to the DMA controller.
The first configuration information is used to trigger the DMA controller to read the target data from the source start address and to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
That is, in the application scenario of this embodiment, the first configuration information generated by the CPU enables the DMA controller to read the target data located in memory level 3 and write it into the determined target cache region in memory level 1, so that the CPU can access, with high access efficiency, the target data stored in the target cache region of memory level 1.
Pseudocode implementing the process of steps 401 to 406 can be as follows:
CacheAddr=SoftCacheAccess
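The patent gives only the single pseudocode line above. The C sketch below is an illustrative reconstruction of steps 401 to 406, assuming SoftCacheAccess receives the source start address, the destination start address and the target length, and assuming hypothetical driver calls dma_submit and dma_transfer_done that do not appear in the patent.

#include <stddef.h>
#include <stdint.h>

struct dma_config { uintptr_t src_start, dst_start; size_t length; };  /* same illustrative layout as above */

extern void dma_submit(const struct dma_config *cfg);   /* assumed driver call that programs the DMA controller */
extern int  dma_transfer_done(void);                    /* assumed status poll                                  */

void *SoftCacheAccess(uintptr_t src_start, uintptr_t dst_start, size_t target_len)
{
    struct dma_config cfg = {
        .src_start = src_start,    /* where the target data currently lives, e.g. memory level 3      */
        .dst_start = dst_start,    /* idle target cache region in the fast level, e.g. memory level 1 */
        .length    = target_len,
    };

    dma_submit(&cfg);              /* step 406: send the first configuration information to the DMA controller */
    while (!dma_transfer_done())   /* wait until the target data has been copied into the target cache region  */
        ;

    return (void *)dst_start;      /* CacheAddr: the CPU processes the target data at this address */
}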
407. Process the target data stored in the target cache region.
The CPU can process the target data stored in the target cache region.
Pseudocode implementing the process of step 407 can be as follows:
Process(CacheAddr)
408. Determine whether the processed target data has been modified; if it has, go to step 409; if it has not, go to step 410.
After processing the target data in the target cache region, the CPU determines whether the target data has been modified by the CPU.
For example, the CPU determines whether the target data has been written to: if it has, the target data has been modified; if it has not, the target data has not been modified.
Of course, this embodiment does not limit the way in which the target data is modified.
409. Send second configuration information to the DMA controller.
After determining that the target data has been modified, the CPU generates the second configuration information.
The second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address and to write the modified target data to the source start address;
that is, the DMA controller writes the modified target data back to memory level 3 according to the second configuration information.
Pseudocode for implementing this step is as follows:
SoftCacheWriteBack(CacheAddr,dirtyFlag)
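Again only the single line above appears in the patent. The following C sketch is an illustrative reconstruction of steps 407 to 410, assuming the write-back is driven by the same hypothetical dma_submit call and that release_cache_region simply marks the target cache region idle again; the extra address and length parameters are assumptions made so the sketch is self-contained.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct dma_config { uintptr_t src_start, dst_start; size_t length; };  /* same illustrative layout as above */

extern void dma_submit(const struct dma_config *cfg);     /* assumed driver call                 */
extern void release_cache_region(uintptr_t dst_start);    /* assumed: mark the region idle again */

void SoftCacheWriteBack(uintptr_t cache_addr, uintptr_t src_start,
                        size_t target_len, bool dirty_flag)
{
    if (dirty_flag) {
        /* Step 409: second configuration information, so the modified target data is
         * read from the destination start address and written back to the source. */
        struct dma_config cfg = {
            .src_start = cache_addr,   /* target cache region, e.g. memory level 1 */
            .dst_start = src_start,    /* original location, e.g. memory level 3   */
            .length    = target_len,
        };
        dma_submit(&cfg);
    }
    /* Step 410 (and likewise after a write-back): the target cache region is released. */
    release_cache_region(cache_addr);
}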
410. Release the unmodified target data.
That is, the CPU directly releases the target data located in memory level 1.
The data processing method of this embodiment can be applied in a baseband processor. It enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
The data processing method for simplifying memory scheduling has been described above in detail from the perspective of the CPU; it is described below in detail from the perspective of the DMA controller.
The specific structure of the computer system used to implement the data processing method shown in Fig. 5 can be seen in Fig. 2; refer to the description of Fig. 2 in the above embodiment, which is not repeated in this embodiment.
The data processing method is described in detail with reference to Fig. 5.
501. Receive first configuration information.
The first configuration information is generated by the CPU and is used to trigger the DMA controller to transfer the target data to the target cache region.
The specific generation process of the first configuration information is described in the above embodiment and is not repeated in this embodiment.
502. Transfer, according to the first configuration information, the target data to be processed in the memory to the target cache region in the memory.
The DMA controller transfers the target data to be processed in the memory to the target cache region in the memory according to the first configuration information.
The target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU.
That is, in this embodiment, the DMA controller can transfer target data stored in a memory level with low CPU access efficiency into a memory level with high CPU access efficiency.
It should be made clear that this embodiment does not limit the specific memory level in which the target cache region is located or the memory level in which the target data is located, as long as the memory level of the target cache region is lower than the memory level of the target data; for example, the DMA controller transfers the target data located in memory level 2 or 3 to the target cache region in memory level 1, effectively improving the processing efficiency of the target data.
The data processing method of this embodiment can be applied in a baseband processor. It enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
Of course, the application of the data processing method of this embodiment in a baseband processor is only an example and is not limiting; the method can also be applied in various communication networks, for example the BTS of a GSM network, the NodeB of a UMTS network and the eNodeB of an LTE network.
The data processing method of this embodiment is described in further detail below with reference to a specific application scenario.
As shown in Fig. 6, the data processing method comprises:
601. Receive first configuration information.
For details see step 501 shown in Fig. 5; they are not repeated in this embodiment.
602. Read the first configuration information to obtain the target length, the source start address of the target data and the destination start address of the target cache region.
The DMA controller reads the first configuration information to obtain the target length, the source start address of the target data and the destination start address of the target cache region.
How the CPU generates the first configuration information including the target length, the source start address of the target data and the destination start address of the target cache region is described in the above embodiment and is not repeated in this embodiment.
In this embodiment, the target cache region is taken to be located in memory level 1 shown in Fig. 2 as an illustration, so the DMA controller can determine, from the first configuration information, the destination start address of the target cache region located in memory level 1.
In this embodiment, the target data is taken to be located in memory level 3 shown in Fig. 2 as an illustration, so the DMA controller can determine, from the first configuration information, the target length of the target data located in memory level 3 and the source start address of the target data.
603. Read the target data from the source start address.
The DMA controller can read the target data from the source start address.
604. Write the read target data to the destination start address.
The DMA controller can write the read target data to the destination start address, so that the target data is transferred to the target cache region.
That is, in the application scenario of this embodiment, the DMA controller can read, according to the first configuration information, the target data located in memory level 3 and write it into the determined target cache region in memory level 1, so that the CPU can access, with high access efficiency, the target data stored in the target cache region of memory level 1.
Pseudocode implementing the process of steps 601 to 604 can be as follows:
CacheAddr=SoftCacheAccess
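Seen from the DMA controller's side, steps 601 to 604 amount to copying the target length of data from the source start address to the destination start address. The C sketch below uses memcpy to stand in for the controller's bus transactions and reuses the illustrative dma_config layout assumed earlier; it is not how a hardware DMA engine is actually implemented.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct dma_config { uintptr_t src_start, dst_start; size_t length; };  /* same illustrative layout as above */

/* Steps 602 to 604: parse the first configuration information, read the target data
 * from the source start address and write it to the destination start address. */
void dma_do_transfer(const struct dma_config *cfg)
{
    const void *src = (const void *)cfg->src_start;  /* target data, e.g. in memory level 3         */
    void       *dst = (void *)cfg->dst_start;        /* target cache region, e.g. in memory level 1 */

    memcpy(dst, src, cfg->length);                   /* move exactly the target length of data      */
}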
605. Receive second configuration information.
If the CPU has processed the target data stored in the target cache region and has determined that the target data has been modified by the CPU, the CPU generates the corresponding second configuration information; the specific generation process of the second configuration information is described in the above embodiment and is not repeated in this embodiment.
606. Determine, according to the second configuration information, that the target data stored in the target cache region has been modified.
In this embodiment, if the DMA controller receives the second configuration information, it can determine that the target data located in the target cache region has been modified by the CPU.
607. Read the modified target data from the destination start address.
The DMA controller reads the modified target data from the destination start address.
608. Write the modified target data to the source start address.
The DMA controller writes the modified target data to the source start address;
that is, in this embodiment, after the second configuration information is received, the DMA controller reads the target data located in the target cache region of memory level 1 and writes it back to memory level 3, as sketched below.
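For the write-back of steps 605 to 608 the direction is simply reversed; the sketch below keeps the same assumptions as the previous one and treats the second configuration information as carrying the same pair of addresses.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct dma_config { uintptr_t src_start, dst_start; size_t length; };  /* same illustrative layout as above */

/* Steps 607 and 608: the modified target data is read from the destination start
 * address (the target cache region) and written back to the source start address. */
void dma_do_writeback(const struct dma_config *second_cfg)
{
    memcpy((void *)second_cfg->src_start,          /* back to the original location, e.g. memory level 3 */
           (const void *)second_cfg->dst_start,    /* from the target cache region, e.g. memory level 1  */
           second_cfg->length);
}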
The data processing method of this embodiment can be applied in a baseband processor. It enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
The structure of a CPU capable of implementing the above data processing method is described in detail below with reference to Fig. 7.
The CPU comprises:
a first determining unit 701, configured to determine the target length of target data to be processed in the memory;
a second determining unit 702, configured to determine a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU; and
a first sending unit 703, configured to send first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache region.
The CPU of this embodiment can be applied in a baseband processor. It enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
Of course, the application of the CPU of this embodiment in a baseband processor is only an example and is not limiting; the CPU can also be applied in various communication networks, for example the BTS of a GSM network, the NodeB of a UMTS network and the eNodeB of an LTE network.
The specific structure of the CPU is described in further detail below with reference to Fig. 8.
The CPU comprises:
a first determining unit 801, configured to determine the target length of target data to be processed in the memory;
a third determining unit 802, configured to determine the source start address of the target data;
a second determining unit 803, configured to determine the target cache region in the memory;
a fourth determining unit 804, configured to determine the destination start address of the target cache region;
a generating unit 805, configured to generate the first configuration information, which includes the target length, the source start address and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address and to write the read target data to the destination start address, so that the target data is transferred to the target cache region;
a first sending unit 806, configured to send the first configuration information to the direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache region;
a processing unit 807, configured to process the target data stored in the target cache region;
a fifth determining unit 808, configured to determine whether the processed target data has been modified;
a second sending unit 809, configured to send, if the target data has been modified, second configuration information to the DMA controller, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address and to write the modified target data to the source start address; and
a sixth determining unit 810, configured to release the unmodified target data if the target data has not been modified.
The CPU of this embodiment enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
The specific structure of a DMA controller capable of implementing the above data processing method is described in detail below with reference to Fig. 9.
The DMA controller comprises:
a first receiving unit 901, configured to receive first configuration information; and
a transfer unit 902, configured to transfer, according to the first configuration information, target data to be processed in the memory to a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU.
The DMA controller of this embodiment can be applied in a baseband processor. It enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
Of course, the application of the DMA controller of this embodiment in a baseband processor is only an example and is not limiting; the DMA controller can also be applied in various communication networks, for example the BTS of a GSM network, the NodeB of a UMTS network and the eNodeB of an LTE network.
The specific structure of the DMA controller is described in further detail below with reference to Fig. 10.
The DMA controller comprises:
a first receiving unit 1001, configured to receive first configuration information;
a first reading unit 1002, configured to read the first configuration information to obtain the target length, the source start address of the target data and the destination start address of the target cache region; and
a transfer unit 1003, configured to transfer, according to the first configuration information, target data to be processed in the memory to the target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU.
Specifically, the transfer unit 1003 comprises:
a reading module 10031, configured to read the target data from the source start address; and
a writing module 10032, configured to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
The DMA controller further comprises:
a second receiving unit 1004, configured to receive second configuration information;
a seventh determining unit 1005, configured to determine, according to the second configuration information, that the target data stored in the target cache region has been modified;
a reading unit 1006, configured to read the modified target data from the destination start address; and
a writing unit 1007, configured to write the modified target data to the source start address.
The DMA controller of this embodiment enables a computer system that has no hardware cache to process data in the memory efficiently, greatly reduces the difficulty of scheduling the memory during data processing, simplifies the memory-scheduling process, simplifies the design of the CPU during data processing and reduces the occupancy of the memory.
As shown in Fig. 11, an embodiment of the present invention further provides a computer system, which comprises:
a CPU 1101, whose specific structure is shown in Fig. 7 to Fig. 8 and is not repeated in this embodiment;
a DMA controller 1102, whose specific structure is shown in Fig. 9 to Fig. 10 and is not repeated in this embodiment; and
a memory 1103, which is connected to the CPU 1101 and to the DMA controller 1102 by an internal bus.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely schematic; for example, the division into units is only a division by logical function, and other divisions are possible in an actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are only used to illustrate the technical solutions of the present invention and are not intended to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (13)

1. A data processing method, characterized by comprising:
determining a target length of target data to be processed in a memory;
determining a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU; and
sending first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache region.
2. The data processing method according to claim 1, characterized in that, before the first configuration information is sent to the DMA controller, the method further comprises:
determining a source start address of the target data;
determining a destination start address of the target cache region; and
generating the first configuration information, which includes the target length, the source start address and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address and to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
3. The data processing method according to claim 2, characterized in that, after the first configuration information is sent to the DMA controller, the method further comprises:
processing the target data stored in the target cache region;
determining whether the processed target data has been modified;
if it has, sending second configuration information to the DMA controller, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address and to write the modified target data to the source start address; and
if it has not, releasing the unmodified target data.
4. A data processing method, characterized by comprising:
receiving first configuration information; and
transferring, according to the first configuration information, target data to be processed in a memory to a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to a target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU.
5. The data processing method according to claim 4, characterized in that, after the first configuration information is received, the method further comprises:
reading the first configuration information to obtain the target length, a source start address of the target data and a destination start address of the target cache region;
and the transferring, according to the first configuration information, the target data to be processed in the memory to the target cache region in the memory comprises:
reading the target data from the source start address; and
writing the read target data to the destination start address, so that the target data is transferred to the target cache region.
6. The data processing method according to claim 5, characterized in that, after the target data to be processed in the memory is transferred to the target cache region in the memory according to the first configuration information, the method further comprises:
receiving second configuration information;
determining, according to the second configuration information, that the target data stored in the target cache region has been modified;
reading the modified target data from the destination start address; and
writing the modified target data to the source start address.
7. A central processing unit (CPU), characterized by comprising:
a first determining unit, configured to determine a target length of target data to be processed in a memory;
a second determining unit, configured to determine a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to the target length of the target data, and the path between the target cache region and the CPU is shorter than the path between the target data and the CPU; and
a first sending unit, configured to send first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache region.
8. The CPU according to claim 7, characterized by further comprising:
a third determining unit, configured to determine a source start address of the target data;
a fourth determining unit, configured to determine a destination start address of the target cache region; and
a generating unit, configured to generate the first configuration information, which includes the target length, the source start address and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address and to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
9. The CPU according to claim 8, characterized by further comprising:
a processing unit, configured to process the target data stored in the target cache region;
a fifth determining unit, configured to determine whether the processed target data has been modified;
a second sending unit, configured to send, if the target data has been modified, second configuration information to the DMA controller, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address and to write the modified target data to the source start address; and
a sixth determining unit, configured to release the unmodified target data if the target data has not been modified.
10. A direct memory access (DMA) controller, characterized by comprising:
a first receiving unit, configured to receive first configuration information; and
a transfer unit, configured to transfer, according to the first configuration information, target data to be processed in a memory to a target cache region in the memory, where the target cache region is currently in an idle state with no data written to it, the length of data the target cache region can store is greater than or equal to a target length of the target data, and the path between the target cache region and a central processing unit (CPU) is shorter than the path between the target data and the CPU.
11. The DMA controller according to claim 10, characterized by further comprising:
a first reading unit, configured to read the first configuration information to obtain the target length, a source start address of the target data and a destination start address of the target cache region;
wherein the transfer unit comprises:
a reading module, configured to read the target data from the source start address; and
a writing module, configured to write the read target data to the destination start address, so that the target data is transferred to the target cache region.
12. The DMA controller according to claim 11, characterized by further comprising:
a second receiving unit, configured to receive second configuration information;
a seventh determining unit, configured to determine, according to the second configuration information, that the target data stored in the target cache region has been modified;
a reading unit, configured to read the modified target data from the destination start address; and
a writing unit, configured to write the modified target data to the source start address.
13. A computer system, comprising the central processing unit (CPU) according to any one of claims 7 to 9, the DMA controller according to any one of claims 10 to 12, and a memory;
wherein the memory is connected to the CPU and the DMA controller through an internal bus.
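
Putting claims 7 to 13 together, a hypothetical end-to-end flow over the sketches above might read as follows; the buffer names and the 4 KiB length are arbitrary example values, and in a real system the two regions would sit at different levels of the memory hierarchy.

```c
#include <stdint.h>

/* Hypothetical system view: CPU, DMA controller and memory share one
 * internal bus; the helpers below come from the earlier sketches. */
static uint8_t buf_in_far_memory[4096];   /* target data, far from the CPU  */
static uint8_t fast_region[4096];         /* idle region close to the CPU   */

int main(void)
{
    enum { LEN = sizeof buf_in_far_memory };       /* example target length */

    /* CPU: first configuration -> DMA moves the data into the fast region */
    send_first_config((uintptr_t)buf_in_far_memory, (uintptr_t)fast_region, LEN);
    dma_transfer();                                /* controller side, modelled above */

    /* CPU: process in the fast region, write back only if something changed */
    dma_config_t cfg = { .src_start = (uintptr_t)buf_in_far_memory,
                         .dst_start = (uintptr_t)fast_region,
                         .length    = LEN };
    process_and_writeback(&cfg);
    return 0;
}
```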
CN201510315790.7A 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system Active CN104965798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510315790.7A CN104965798B (en) 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system

Publications (2)

Publication Number Publication Date
CN104965798A true CN104965798A (en) 2015-10-07
CN104965798B CN104965798B (en) 2018-03-09

Family

ID=54219834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510315790.7A Active CN104965798B (en) 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system

Country Status (1)

Country Link
CN (1) CN104965798B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4271466A (en) * 1975-02-20 1981-06-02 Panafacom Limited Direct memory access control system with byte/word control of data bus
US6128728A (en) * 1997-08-01 2000-10-03 Micron Technology, Inc. Virtual shadow registers and virtual register windows
CN102467472A (en) * 2010-11-08 2012-05-23 中兴通讯股份有限公司 System-on-chip (SoC) chip boot startup device and SoC chip
US20140310467A1 (en) * 2011-10-28 2014-10-16 The Regents Of The University Of California Multiple-core computer processor for reverse time migration
CN103713953A (en) * 2013-12-17 2014-04-09 上海华为技术有限公司 Device and method for transferring data in memory

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119248A (en) * 2016-12-05 2019-08-13 华为技术有限公司 Control method, storage equipment and the system of reading and writing data order
CN110119248B (en) * 2016-12-05 2021-10-15 华为技术有限公司 Control method of data read-write command, storage device and system
WO2018102967A1 (en) * 2016-12-05 2018-06-14 华为技术有限公司 Control method, storage device and system for data read/write command in nvme over fabric architecture
US11144465B2 (en) 2017-04-14 2021-10-12 Huawei Technologies Co., Ltd. Data access method and apparatus
CN110419034A (en) * 2017-04-14 2019-11-05 华为技术有限公司 A kind of data access method and device
CN110419034B (en) * 2017-04-14 2021-01-08 华为技术有限公司 Data access method and device
WO2019127507A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Data processing method and device, dma controller, and computer readable storage medium
WO2019127517A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Data processing method and device, dma controller, and computer readable storage medium
CN109074335A (en) * 2017-12-29 2018-12-21 深圳市大疆创新科技有限公司 Data processing method, equipment, dma controller and computer readable storage medium
CN108885596A (en) * 2017-12-29 2018-11-23 深圳市大疆创新科技有限公司 Data processing method, equipment, dma controller and computer readable storage medium
CN111581118A (en) * 2019-12-31 2020-08-25 北京忆芯科技有限公司 Computing acceleration system
CN114442909A (en) * 2020-11-04 2022-05-06 大唐移动通信设备有限公司 Data processing method and device
CN115454900A (en) * 2022-08-08 2022-12-09 北京阿帕科蓝科技有限公司 Data transmission method, data transmission device, computer equipment, storage medium and program product

Also Published As

Publication number Publication date
CN104965798B (en) 2018-03-09

Similar Documents

Publication Publication Date Title
CN104965798A (en) Data processing method, related device and data processing system
CN113138801B (en) Command distribution device, method, chip, computer device and storage medium
CN105224444A (en) Daily record generation method and device
CN105487987A (en) Method and device for processing concurrent sequential reading IO (Input/Output)
CN113138802B (en) Command distribution device, method, chip, computer device and storage medium
US20170255249A1 (en) Apparatuses and methods of entering unselected memories into a different power mode during multi-memory operation
CN102566958A (en) Image segmentation processing device based on SGDMA (scatter gather direct memory access)
CN105677259A (en) Method for storing file in mobile terminal and mobile terminal
US9746897B2 (en) Method for controlling a multi-core central processor unit of a device establishing a relationship between device operational parameters and a number of started cores
US9223379B2 (en) Intelligent receive buffer management to optimize idle state residency
CN103873886A (en) Image information processing method, device and system
CN103905310A (en) Message processing method and forwarding device
CN105430028A (en) Service calling method, service providing method, and node
CN104270287A (en) Message disorder detecting method and device
CN103578077A (en) Image zooming method and related device
US20180143785A1 (en) Server system and reading method
US20140324368A1 (en) Test method, test system and electronic device employing the same
CN103391246A (en) Message processing method and device
CN102523112B (en) Information processing method and equipment
JP2022502949A (en) Downstream control channel transmission method, downlink control channel reception method, terminal and network side equipment
CN104391564A (en) Power consumption control method and device
CN105518617A (en) Caching data processing method and device
CN105117167A (en) Information processing method and apparatus and electronic device
CN105117358A (en) DMA (Direct Memory Access) data transmission method and apparatus
CN104918314A (en) Method and device for adjusting power consumption of AP

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant