CN104965798B - Data processing method, related device and system - Google Patents

Data processing method, related device and system

Info

Publication number
CN104965798B
Authority
CN
China
Prior art keywords
target
target data
data
configuration information
cache area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510315790.7A
Other languages
Chinese (zh)
Other versions
CN104965798A (en)
Inventor
袁张慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huawei Technologies Co Ltd
Original Assignee
Shanghai Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huawei Technologies Co Ltd filed Critical Shanghai Huawei Technologies Co Ltd
Priority to CN201510315790.7A priority Critical patent/CN104965798B/en
Publication of CN104965798A publication Critical patent/CN104965798A/en
Application granted granted Critical
Publication of CN104965798B publication Critical patent/CN104965798B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

Embodiments of the present invention disclose a data processing method, a related device, and a system. The data processing method includes: determining the target length of target data to be processed in memory; determining a target cache area in memory, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the central processing unit (CPU) is shorter than the path length between the target data and the CPU; and sending first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area. In this way the data in memory can be processed efficiently, the difficulty of scheduling memory during data processing is greatly reduced, and the memory scheduling process is simplified.

Description

Data processing method, related device and system
Technical field
The present invention relates to the communications field, and in particular to a data processing method, a related device, and a system.
Background
A computer system of the prior art is shown in Fig. 1. The central processing unit (CPU) can read data from and write data to the memory, and the CPU and the memory are connected by an internal bus. Specifically, the memory in the prior art is divided into multiple levels: the higher the level, the larger the memory capacity, but the lower the CPU access efficiency and the longer the path length from the CPU. To improve data processing efficiency, the memory level that is lowest and has the highest access efficiency is time-division multiplexed by the CPU.
Take the memory divided into the three levels shown in Fig. 1 as an example. The memory of level 1 is the smallest but has the highest CPU access efficiency, while the memory of level 3 is the largest but has the lowest CPU access efficiency. For data that resides in level 3 and needs to be processed frequently, level 1 is divided into multiple time periods: within a fixed time period T the CPU uses level 1 as the memory for that data, and in the remaining periods of level 1 outside period T it is used as memory for other data.
The time-multiplexing of memory levels in the prior art makes memory scheduling difficult and couples memory usage together, which reduces data processing efficiency and increases the burden on the CPU.
Summary of the invention
Embodiments of the present invention provide a data processing method, a related device, and a system, which can effectively reduce the complexity of memory scheduling.
A first aspect of the embodiments of the present invention provides a data processing method, including:
determining the target length of target data to be processed in memory;
determining a target cache area in memory, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the central processing unit (CPU) is shorter than the path length between the target data and the CPU; and
sending first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area.
With reference to the first aspect of the embodiments of the present invention, in a first implementation of the first aspect,
before the first configuration information is sent to the DMA controller, the method further includes:
determining the source start address of the target data;
determining the destination start address of the target cache area; and
generating the first configuration information, which includes the target length, the source start address, and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address, and is further used to trigger the DMA controller to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
With reference to the first implementation of the first aspect, in a second implementation of the first aspect,
after the first configuration information is sent to the DMA controller, the method further includes:
processing the target data stored in the target cache area;
determining whether the processed target data has been modified;
if so, sending second configuration information to the DMA controller, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address, and is further used to trigger the DMA controller to write the modified target data back to the source start address; and
if not, releasing the unmodified target data.
A second aspect of the embodiments of the present invention provides a data processing method, including:
receiving first configuration information; and
transferring target data to be processed in memory to a target cache area in memory according to the first configuration information, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the central processing unit (CPU) is shorter than the path length between the target data and the CPU.
With reference to the second aspect of the embodiments of the present invention, in a first implementation of the second aspect,
after the first configuration information is received, the method further includes:
reading the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area;
and transferring the target data to be processed in memory to the target cache area in memory according to the first configuration information includes:
reading the target data from the source start address; and
writing the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
With reference to the first implementation of the second aspect, in a second implementation of the second aspect,
after the target data to be processed in memory is transferred to the target cache area in memory according to the first configuration information, the method further includes:
receiving second configuration information;
determining, according to the second configuration information, that the target data stored in the target cache area has been modified;
reading the modified target data from the destination start address; and
writing the modified target data back to the source start address.
A third aspect of the embodiments of the present invention provides a central processing unit (CPU), including:
a first determining unit, configured to determine the target length of target data to be processed in memory;
a second determining unit, configured to determine a target cache area in memory, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU; and
a first sending unit, configured to send first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area.
With reference to the third aspect of the embodiments of the present invention, in a first implementation of the third aspect, the CPU further includes:
a third determining unit, configured to determine the source start address of the target data;
a fourth determining unit, configured to determine the destination start address of the target cache area; and
a generating unit, configured to generate the first configuration information, which includes the target length, the source start address, and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address, and is further used to trigger the DMA controller to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
With reference to the first implementation of the third aspect, in a second implementation of the third aspect, the CPU further includes:
a processing unit, configured to process the target data stored in the target cache area;
a fifth determining unit, configured to determine whether the processed target data has been modified;
a second sending unit, configured to send second configuration information to the DMA controller if the target data has been modified, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address, and is further used to trigger the DMA controller to write the modified target data back to the source start address; and
a sixth determining unit, configured to release the unmodified target data if the target data has not been modified.
A fourth aspect of the embodiments of the present invention provides a direct memory access (DMA) controller, including:
a first receiving unit, configured to receive first configuration information; and
a transfer unit, configured to transfer target data to be processed in memory to a target cache area in memory according to the first configuration information, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the central processing unit (CPU) is shorter than the path length between the target data and the CPU.
With reference to the fourth aspect of the embodiments of the present invention, in a first implementation of the fourth aspect, the DMA controller further includes:
a first reading unit, configured to read the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area;
where the transfer unit includes:
a reading module, configured to read the target data from the source start address; and
a writing module, configured to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
With reference to the first implementation of the fourth aspect, in a second implementation of the fourth aspect, the DMA controller further includes:
a second receiving unit, configured to receive second configuration information;
a seventh determining unit, configured to determine, according to the second configuration information, that the target data stored in the target cache area has been modified;
a reading unit, configured to read the modified target data from the destination start address; and
a writing unit, configured to write the modified target data back to the source start address.
A fifth aspect of the embodiments of the present invention provides a computer system, including the CPU described in any one of the third aspect to the second implementation of the third aspect, the DMA controller described in any one of the fourth aspect to the second implementation of the fourth aspect, and memory;
where the memory is connected to the CPU and the DMA controller by an internal bus.
Embodiments of the present invention disclose a data processing method, a related device, and a system. The data processing method includes: determining the target length of target data to be processed in memory; determining a target cache area in memory, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU; and sending first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area. In this way the data in memory can be processed efficiently, the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU is reduced, and the memory occupancy during data processing is reduced.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of a computer system provided by the prior art;
Fig. 2 is a schematic structural diagram of a computer system in an embodiment of the present invention;
Fig. 3 is a flowchart of steps of a data processing method provided by the present invention;
Fig. 4 is another flowchart of steps of a data processing method provided by the present invention;
Fig. 5 is another flowchart of steps of a data processing method provided by the present invention;
Fig. 6 is another flowchart of steps of a data processing method provided by the present invention;
Fig. 7 is a schematic structural block diagram of a CPU provided by the present invention;
Fig. 8 is another schematic structural block diagram of a CPU provided by the present invention;
Fig. 9 is a schematic structural block diagram of a DMA controller provided by the present invention;
Fig. 10 is another schematic structural block diagram of a DMA controller provided by the present invention;
Fig. 11 is a schematic structural block diagram of a computer system provided by the present invention.
Embodiment
To better understand the data processing method shown in the embodiments of the present invention, the structure of a computer system capable of implementing the data processing method shown in this embodiment is first described in detail.
Referring to Fig. 2, the computer system shown in this embodiment includes a central processing unit (CPU) 201, memory 202, and a direct memory access (DMA) controller 203. The CPU 201, the memory 202, and the DMA controller 203 are connected by an internal bus. Under the control of the CPU 201, the DMA controller 203 can carry out data transfers between the memory 202 and the DMA controller 203; that is, the DMA controller 203 can write data to and read data from the memory 202 without requiring computation by the CPU 201, which speeds up the computer system and effectively improves the performance of data transfer.
The memory 202 is divided into multiple levels: the higher the level, the larger the memory capacity, but the lower the access efficiency of the CPU 201 and the longer the path length from the CPU.
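Purely as an illustration (not part of the patent text), such a memory hierarchy could be modeled with per-level descriptors like the following C sketch; the structure name, fields, and numeric values are all assumptions made for exposition:

/* Hypothetical descriptor for one memory level (illustrative only). */
struct mem_level {
    unsigned int  level;      /* 1 = closest to the CPU, highest access efficiency */
    unsigned long base_addr;  /* start address of the level in the address space   */
    unsigned long size;       /* capacity of the level (larger at higher levels)   */
    unsigned int  path_len;   /* path length to the CPU (shorter = faster access)  */
};

/* Example layout matching the three levels of Fig. 2; all values are made up. */
static struct mem_level mem_levels[3] = {
    { 1, 0x00000000UL,  64UL * 1024,         1 },
    { 2, 0x20000000UL,   4UL * 1024 * 1024,  4 },
    { 3, 0x80000000UL, 256UL * 1024 * 1024, 16 },
};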
Specific to this embodiment, the data processing method is shown in Fig. 3.
301. Determine the target length of target data to be processed in memory.
In this embodiment, the CPU determines the target length of the target data stored in memory that needs to be processed.
In this embodiment, the target data may be data that the CPU determines needs to be processed frequently, such that the memory level in which the target data currently resides cannot guarantee efficient processing of the target data.
302. Determine a target cache area in memory.
In this embodiment, the memory level in which the target cache area determined by the CPU is located is lower than the memory level in which the target data is located; that is, the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU.
The target cache area is in an idle state, with no data currently written to it.
The data length that the target cache area can store is greater than or equal to the target length of the target data.
That is, in this embodiment the CPU opens up one or more idle buffer segments in a memory level with high CPU access efficiency, and a buffer segment whose storable data length is greater than or equal to the target length of the target data is determined to be the target cache area, so that the data can be processed efficiently in the target cache area.
Specific to Fig. 2, the memory shown in this embodiment is divided into three memory levels (memory level 1, memory level 2, and memory level 3). It should be made clear that this embodiment does not limit the specific number of memory levels; three levels are used here only as an example.
It should also be made clear that this embodiment does not limit the specific memory level in which the target cache area is located or the specific memory level in which the target data is located, as long as the memory level of the target cache area is lower than the memory level of the target data; for example, the target cache area is located in memory level 1 shown in Fig. 2 and the target data is located in memory level 2 or 3.
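For illustration only, the selection described in step 302 could be sketched in C as follows; the structure and the function name find_target_cache_area are assumptions and do not appear in the patent:

/* Hypothetical record of an idle buffer segment in a given memory level. */
struct free_buffer {
    unsigned int  level;       /* memory level the segment belongs to          */
    unsigned long start_addr;  /* destination start address of the segment     */
    unsigned long length;      /* storable data length of the segment          */
    int           in_use;      /* 0 = idle (no data written), 1 = occupied     */
};

/*
 * Pick a target cache area: an idle segment whose storable length is at least
 * target_length and whose memory level is lower (closer to the CPU) than the
 * level currently holding the target data.
 */
struct free_buffer *find_target_cache_area(struct free_buffer *bufs, int n,
                                           unsigned long target_length,
                                           unsigned int data_level)
{
    for (int i = 0; i < n; i++) {
        if (!bufs[i].in_use &&
            bufs[i].length >= target_length &&
            bufs[i].level < data_level)
            return &bufs[i];
    }
    return 0; /* no suitable idle segment found in a closer memory level */
}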
303. Send first configuration information to the direct memory access (DMA) controller.
The CPU generates the first configuration information, which is used to trigger the DMA controller to transfer the target data to the target cache area.
The CPU sends the first configuration information to the DMA controller, so that the DMA controller can transfer the target data to the target cache area according to the first configuration information.
This embodiment does not limit the specific information contained in the first configuration information, as long as the DMA controller can transfer the target data to the target cache area according to the first configuration information.
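As one possible, purely illustrative realization of such configuration information, the descriptor could carry the three fields named later in steps 401 to 406; the struct below is an assumption for exposition, not the patent's definition:

/* Hypothetical layout of the first configuration information (illustrative only). */
struct dma_config {
    unsigned long src_start_addr; /* source start address of the target data             */
    unsigned long dst_start_addr; /* destination start address in the target cache area  */
    unsigned long target_length;  /* target length of the target data, in bytes          */
};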
The data processing method shown in this embodiment can be applied to a baseband processor, so that even when the computer system has no hardware cache (Cache), the data in memory can still be processed efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
Of course, the baseband processor is only one example of an application of the data processing method shown in this embodiment, and is not a limitation; the method can also be applied in various communication networks, such as the BTS of a GSM network, the NodeB of a UMTS network, and the eNodeB of an LTE network.
The data processing method shown in this embodiment is further described in detail below with reference to a specific application scenario.
As shown in Fig. 4, the data processing method includes:
401. Determine the target length of target data to be processed in memory.
For details, see step 301 shown in Fig. 3; this is not repeated in this embodiment.
This embodiment is described by taking the case where the target data is located in memory level 3 shown in Fig. 2 as an example.
402. Determine the source start address of the target data.
The CPU determines the source start address of the target data.
403. Determine the target cache area in memory.
For details, see step 302 shown in Fig. 3; this is not repeated in this embodiment.
404. Determine the destination start address of the target cache area.
This embodiment is described by taking the case where the target cache area is located in memory level 1 shown in Fig. 2 as an example.
405. Generate the first configuration information, which includes the target length, the source start address, and the destination start address.
The CPU generates the first configuration information according to the determined target length, source start address, and destination start address.
406. Send the first configuration information to the DMA controller.
The first configuration information is used to trigger the DMA controller to read the target data from the source start address, and is further used to trigger the DMA controller to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
That is, in the application scenario of this embodiment, the first configuration information generated by the CPU enables the DMA controller to read the target data located in memory level 3 and write the target data to the target cache area determined in memory level 1, so that the CPU can access the target data stored in the target cache area in memory level 1 with high access efficiency.
The pseudocode for implementing the process shown in steps 401 to 406 can be as follows:
CacheAddr=SoftCacheAccess
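A minimal sketch of what such a SoftCacheAccess routine might do, assuming the hypothetical dma_config descriptor and find_target_cache_area helper sketched above; the signature, the free_buffers table, and the dma_submit call are assumptions, not part of the patent:

extern struct free_buffer free_buffers[];                        /* assumed table of idle segments */
extern int                num_free_buffers;
extern void               dma_submit(const struct dma_config *cfg); /* assumed DMA kick-off        */

/*
 * Hypothetical expansion of the one-line pseudocode above: build the first
 * configuration information (steps 402-405), hand it to the DMA controller
 * (step 406), and return the address of the target cache area to the CPU.
 */
unsigned long SoftCacheAccess(unsigned long src_start_addr,
                              unsigned long target_length,
                              unsigned int  data_level)
{
    struct free_buffer *buf;
    struct dma_config   cfg;

    /* Steps 403-404: pick an idle target cache area in a closer memory level. */
    buf = find_target_cache_area(free_buffers, num_free_buffers,
                                 target_length, data_level);
    if (!buf)
        return 0; /* no idle segment large enough */

    /* Step 405: generate the first configuration information. */
    cfg.src_start_addr = src_start_addr;
    cfg.dst_start_addr = buf->start_addr;
    cfg.target_length  = target_length;

    /* Step 406: send the first configuration information to the DMA controller. */
    dma_submit(&cfg);

    buf->in_use = 1;
    return buf->start_addr; /* the CacheAddr that Process() operates on below */
}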
407. Process the target data stored in the target cache area.
The CPU can process the target data stored in the target cache area.
The pseudocode for implementing the process shown in step 407 can be as follows:
Process(CacheAddr)
408. Determine whether the processed target data has been modified; if so, go to step 409; if not, go to step 410.
After the CPU processes the target data located in the target cache area, the CPU determines whether the target data has been modified by the CPU.
For example, the CPU determines whether a write has been performed on the target data: if so, the target data has been modified; if not, the target data has not been modified.
Of course, this embodiment does not limit the manner in which the target data is modified.
409. Send second configuration information to the DMA controller.
After the CPU determines that the target data has been modified, the CPU generates the second configuration information.
The second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address, and is further used to trigger the DMA controller to write the modified target data back to the source start address.
That is, according to the second configuration information, the DMA controller writes the modified target data back to memory level 3.
The pseudocode for implementing this step is as follows:
SoftCacheWriteBack(CacheAddr, dirtyFlag)
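Again purely as an illustration, and reusing the hypothetical dma_config descriptor from above, SoftCacheWriteBack might look as follows; the bookkeeping helpers lookup_cache_entry and release_cache_entry and the dirty-flag convention are assumptions:

extern void dma_submit(const struct dma_config *cfg);            /* assumed, as above        */
extern void lookup_cache_entry(unsigned long cache_addr,
                               unsigned long *src_start_addr,
                               unsigned long *target_length);    /* assumed bookkeeping      */
extern void release_cache_entry(unsigned long cache_addr);       /* assumed bookkeeping      */

void SoftCacheWriteBack(unsigned long CacheAddr, int dirtyFlag)
{
    unsigned long     src_start_addr, target_length;
    struct dma_config cfg;

    /* Recover the source start address and target length recorded when the
       target cache area was filled by SoftCacheAccess. */
    lookup_cache_entry(CacheAddr, &src_start_addr, &target_length);

    if (dirtyFlag) {
        /* Step 409: second configuration information, with the direction reversed. */
        cfg.src_start_addr = CacheAddr;       /* read from the destination start address */
        cfg.dst_start_addr = src_start_addr;  /* write back to the source start address  */
        cfg.target_length  = target_length;
        dma_submit(&cfg);
    }

    /* Step 410 (and after write-back): release the target cache area. */
    release_cache_entry(CacheAddr);
}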
410. Release the unmodified target data.
That is, the CPU directly releases the target data located in memory level 1.
The data processing method shown in this embodiment can be applied to a baseband processor, so that when the computer system has no hardware cache (Cache), the data in memory can still be processed efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
The above describes in detail, from the perspective of the CPU, how to implement the data processing method that simplifies memory scheduling; the following describes in detail, from the perspective of the DMA controller, how to implement the data processing method that simplifies memory scheduling.
The specific structure of the computer system for implementing the data processing method shown in Fig. 5 can be seen in Fig. 2; refer to the description of Fig. 2 in the above embodiment, which is not repeated here.
The data processing method is described in detail with reference to Fig. 5:
501. Receive the first configuration information.
The first configuration information is generated by the CPU and is used to trigger the DMA controller to transfer the target data to the target cache area.
The specific generation process of the first configuration information is described in the above embodiment and is not repeated here.
502. Transfer the target data to be processed in memory to the target cache area in memory according to the first configuration information.
The DMA controller transfers the target data to be processed in memory to the target cache area in memory according to the first configuration information.
The target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU.
That is, in this embodiment the DMA controller can transfer target data stored in a memory level with low CPU access efficiency to a memory level with high CPU access efficiency.
It should be made clear that this embodiment does not limit the specific memory level in which the target cache area is located or the specific memory level in which the target data is located, as long as the memory level of the target cache area is lower than the memory level of the target data; for example, the DMA controller transfers the target data located in memory level 2 or 3 to the target cache area in memory level 1, so as to effectively improve the processing efficiency of the target data.
The data processing method shown in this embodiment can be applied to a baseband processor, so that when the computer system has no hardware cache (Cache), the data in memory can still be processed efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
Of course, the baseband processor is only one example of an application of the data processing method shown in this embodiment, and is not a limitation; the method can also be applied in various communication networks, such as the BTS of a GSM network, the NodeB of a UMTS network, and the eNodeB of an LTE network.
The data processing method shown in this embodiment is further described in detail below with reference to a specific application scenario.
As shown in Fig. 6, the data processing method includes:
601. Receive the first configuration information.
For details, see step 501 shown in Fig. 5; this is not repeated in this embodiment.
602. Read the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area.
The DMA controller reads the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area.
How the CPU specifically generates the first configuration information including the target length, the source start address of the target data, and the destination start address of the target cache area is described in the above embodiment and is not repeated here.
This embodiment is described by taking the case where the target cache area is located in memory level 1 shown in Fig. 2 as an example; the DMA controller can then determine, according to the first configuration information, the destination start address of the target cache area located in memory level 1.
This embodiment is also described by taking the case where the target data is located in memory level 3 shown in Fig. 2 as an example; the DMA controller can then determine, according to the first configuration information, the target length and the source start address of the target data located in memory level 3.
603. Read the target data from the source start address.
The DMA controller can read the target data from the source start address.
604. Write the target data that has been read to the destination start address.
The DMA controller can write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
That is, in the application scenario of this embodiment, the DMA controller can, according to the first configuration information, read the target data located in memory level 3 and write the target data to the target cache area determined in memory level 1, so that the CPU can access the target data stored in the target cache area in memory level 1 with high access efficiency.
The pseudocode for implementing the process shown in steps 601 to 604 can be as follows:
CacheAddr=SoftCacheAccess
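For illustration only, the DMA controller's side of steps 601 to 604 could be sketched as follows, again assuming the hypothetical dma_config descriptor introduced above; the byte-copy loop merely stands in for the controller's hardware transfer engine:

/*
 * Hypothetical handler on the DMA-controller side for the first configuration
 * information: read the target data from the source start address and write
 * it to the destination start address (steps 601-604). A real controller
 * performs this copy in hardware, without involving the CPU.
 */
void dma_handle_first_config(const struct dma_config *cfg)
{
    volatile unsigned char *src = (volatile unsigned char *)cfg->src_start_addr;
    volatile unsigned char *dst = (volatile unsigned char *)cfg->dst_start_addr;

    for (unsigned long i = 0; i < cfg->target_length; i++)
        dst[i] = src[i]; /* steps 603-604: copy target_length bytes */
}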
605. Receive the second configuration information.
If the CPU processes the target data stored in the target cache area and determines that the target data has been modified by the CPU, the CPU correspondingly generates the second configuration information; the specific generation process of the second configuration information is described in the above embodiment and is not repeated here.
606. Determine, according to the second configuration information, that the target data stored in the target cache area has been modified.
In this embodiment, upon receiving the second configuration information, the DMA controller can determine that the target data located in the target cache area has been modified by the CPU.
607. Read the modified target data from the destination start address.
The DMA controller reads the modified target data from the destination start address.
608. Write the modified target data to the source start address.
The DMA controller writes the modified target data to the source start address.
That is, in this embodiment, after receiving the second configuration information, the DMA controller reads the target data located in the target cache area of memory level 1 and writes the target data back to memory level 3.
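A corresponding illustrative sketch for steps 605 to 608, under the same assumptions (the second configuration information is assumed to reuse the dma_config layout, with the cache address as the read side and the original source address as the write side):

/*
 * Hypothetical handler for the second configuration information: read the
 * modified target data from the target cache area and write it back to the
 * original source start address (steps 605-608).
 */
void dma_handle_second_config(const struct dma_config *cfg)
{
    volatile unsigned char *cache = (volatile unsigned char *)cfg->src_start_addr;
    volatile unsigned char *orig  = (volatile unsigned char *)cfg->dst_start_addr;

    for (unsigned long i = 0; i < cfg->target_length; i++)
        orig[i] = cache[i]; /* steps 607-608: write the modified data back */
}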
The data processing method shown in this embodiment can be applied to a baseband processor, so that when the computer system has no hardware cache (Cache), the data in memory can still be processed efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
The CPU capable of implementing the above data processing method is described in detail below with reference to its structure shown in Fig. 7.
The CPU includes:
a first determining unit 701, configured to determine the target length of target data to be processed in memory;
a second determining unit 702, configured to determine a target cache area in memory, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU; and
a first sending unit 703, configured to send first configuration information to a direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area.
The CPU shown in this embodiment can be applied to a baseband processor, so that when the computer system has no hardware cache (Cache), the data in memory can still be processed efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
Of course, the baseband processor is only one example of an application of the CPU shown in this embodiment, and is not a limitation; the CPU can also be applied in various communication networks, such as the BTS of a GSM network, the NodeB of a UMTS network, and the eNodeB of an LTE network.
The specific structure of the CPU is described in further detail below with reference to Fig. 8.
The CPU includes:
a first determining unit 801, configured to determine the target length of target data to be processed in memory;
a third determining unit 802, configured to determine the source start address of the target data;
a second determining unit 803, configured to determine a target cache area in memory;
a fourth determining unit 804, configured to determine the destination start address of the target cache area;
a generating unit 805, configured to generate the first configuration information, which includes the target length, the source start address, and the destination start address, where the first configuration information is used to trigger the DMA controller to read the target data from the source start address, and is further used to trigger the DMA controller to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area;
a first sending unit 806, configured to send the first configuration information to the direct memory access (DMA) controller, where the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area;
a processing unit 807, configured to process the target data stored in the target cache area;
a fifth determining unit 808, configured to determine whether the processed target data has been modified;
a second sending unit 809, configured to send second configuration information to the DMA controller if the target data has been modified, where the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address, and is further used to trigger the DMA controller to write the modified target data back to the source start address; and
a sixth determining unit 810, configured to release the unmodified target data if the target data has not been modified.
The CPU shown in this embodiment enables the computer system, even when it has no hardware cache (Cache), to process the data in memory efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
The DMA controller capable of implementing the above data processing method is described in detail below with reference to its structure shown in Fig. 9.
The DMA controller includes:
a first receiving unit 901, configured to receive first configuration information; and
a transfer unit 902, configured to transfer target data to be processed in memory to a target cache area in memory according to the first configuration information, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU.
The DMA controller shown in this embodiment can be applied to a baseband processor, so that when the computer system has no hardware cache (Cache), the data in memory can still be processed efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
Of course, the baseband processor is only one example of an application of the DMA controller shown in this embodiment, and is not a limitation; the DMA controller can also be applied in various communication networks, such as the BTS of a GSM network, the NodeB of a UMTS network, and the eNodeB of an LTE network.
The specific structure of the DMA controller is described in further detail below with reference to Fig. 10.
The DMA controller includes:
a first receiving unit 1001, configured to receive first configuration information;
a first reading unit 1002, configured to read the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area;
a transfer unit 1003, configured to transfer target data to be processed in memory to a target cache area in memory according to the first configuration information, where the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU;
specifically, the transfer unit 1003 includes:
a reading module 10031, configured to read the target data from the source start address; and
a writing module 10032, configured to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area;
a second receiving unit 1004, configured to receive second configuration information;
a seventh determining unit 1005, configured to determine, according to the second configuration information, that the target data stored in the target cache area has been modified;
a reading unit 1006, configured to read the modified target data from the destination start address; and
a writing unit 1007, configured to write the modified target data back to the source start address.
The DMA controller shown in this embodiment enables the computer system, even when it has no hardware cache (Cache), to process the data in memory efficiently; the difficulty of scheduling memory during data processing is greatly reduced, the memory scheduling process is simplified, the design difficulty of the CPU during data processing is simplified, and the memory occupancy is reduced.
As shown in Fig. 11, an embodiment of the present invention further provides a computer system, which includes:
a CPU 1101, whose specific structure can be seen in Fig. 7 to Fig. 8 and is not repeated in this embodiment;
a DMA controller 1102, whose specific structure can be seen in Fig. 9 to Fig. 10 and is not repeated in this embodiment; and
memory 1103, which is connected to the CPU 1101 and the DMA controller 1102 by an internal bus.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division into units is merely a division by logical function, and there may be other ways of division in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (13)

  1. A data processing method, characterized by comprising:
    determining the target length of target data to be processed in memory;
    determining a target cache area in memory, wherein the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and a central processing unit (CPU) is shorter than the path length between the target data and the CPU; and
    sending first configuration information to a direct memory access (DMA) controller, wherein the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area.
  2. The data processing method according to claim 1, characterized in that before the first configuration information is sent to the direct memory access (DMA) controller, the method further comprises:
    determining the source start address of the target data;
    determining the destination start address of the target cache area; and
    generating the first configuration information, which includes the target length, the source start address, and the destination start address, wherein the first configuration information is used to trigger the DMA controller to read the target data from the source start address, and is further used to trigger the DMA controller to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
  3. The data processing method according to claim 2, characterized in that after the first configuration information is sent to the direct memory access (DMA) controller, the method further comprises:
    processing the target data stored in the target cache area;
    determining whether the processed target data has been modified;
    if so, sending second configuration information to the DMA controller, wherein the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address, and is further used to trigger the DMA controller to write the modified target data back to the source start address; and
    if not, releasing the unmodified target data.
  4. A data processing method, characterized by comprising:
    receiving first configuration information; and
    transferring target data to be processed in memory to a target cache area in memory according to the first configuration information, wherein the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and a central processing unit (CPU) is shorter than the path length between the target data and the CPU.
  5. The data processing method according to claim 4, characterized in that after the first configuration information is received, the method further comprises:
    reading the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area;
    and the transferring of the target data to be processed in memory to the target cache area in memory according to the first configuration information comprises:
    reading the target data from the source start address; and
    writing the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
  6. The data processing method according to claim 5, characterized in that after the target data to be processed in memory is transferred to the target cache area in memory according to the first configuration information, the method further comprises:
    receiving second configuration information;
    determining, according to the second configuration information, that the target data stored in the target cache area has been modified;
    reading the modified target data from the destination start address; and
    writing the modified target data back to the source start address.
  7. A central processing unit (CPU), characterized by comprising:
    a first determining unit, configured to determine the target length of target data to be processed in memory;
    a second determining unit, configured to determine a target cache area in memory, wherein the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and the CPU is shorter than the path length between the target data and the CPU; and
    a first sending unit, configured to send first configuration information to a direct memory access (DMA) controller, wherein the first configuration information is used to trigger the DMA controller to transfer the target data to the target cache area.
  8. The CPU according to claim 7, characterized by further comprising:
    a third determining unit, configured to determine the source start address of the target data;
    a fourth determining unit, configured to determine the destination start address of the target cache area; and
    a generating unit, configured to generate the first configuration information, which includes the target length, the source start address, and the destination start address, wherein the first configuration information is used to trigger the DMA controller to read the target data from the source start address, and is further used to trigger the DMA controller to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
  9. The CPU according to claim 8, characterized by further comprising:
    a processing unit, configured to process the target data stored in the target cache area;
    a fifth determining unit, configured to determine whether the processed target data has been modified;
    a second sending unit, configured to send second configuration information to the DMA controller if the target data has been modified, wherein the second configuration information is used to trigger the DMA controller to read the modified target data from the destination start address, and is further used to trigger the DMA controller to write the modified target data back to the source start address; and
    a sixth determining unit, configured to release the unmodified target data if the target data has not been modified.
  10. A direct memory access (DMA) controller, characterized by comprising:
    a first receiving unit, configured to receive first configuration information; and
    a transfer unit, configured to transfer target data to be processed in memory to a target cache area in memory according to the first configuration information, wherein the target cache area is in an idle state with no data currently written to it, the data length that the target cache area can store is greater than or equal to the target length of the target data, and the path length between the target cache area and a central processing unit (CPU) is shorter than the path length between the target data and the CPU.
  11. The DMA controller according to claim 10, characterized by further comprising:
    a first reading unit, configured to read the first configuration information to obtain the target length, the source start address of the target data, and the destination start address of the target cache area;
    wherein the transfer unit comprises:
    a reading module, configured to read the target data from the source start address; and
    a writing module, configured to write the target data that has been read to the destination start address, so that the target data is transferred to the target cache area.
  12. The DMA controller according to claim 11, characterized by further comprising:
    a second receiving unit, configured to receive second configuration information;
    a seventh determining unit, configured to determine, according to the second configuration information, that the target data stored in the target cache area has been modified;
    a reading unit, configured to read the modified target data from the destination start address; and
    a writing unit, configured to write the modified target data back to the source start address.
  13. A computer system, characterized by comprising the central processing unit (CPU) according to any one of claims 7 to 9, the DMA controller according to any one of claims 10 to 12, and memory;
    wherein the memory is connected to the CPU and the DMA controller by an internal bus.
CN201510315790.7A 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system Active CN104965798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510315790.7A CN104965798B (en) 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510315790.7A CN104965798B (en) 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system

Publications (2)

Publication Number Publication Date
CN104965798A CN104965798A (en) 2015-10-07
CN104965798B true CN104965798B (en) 2018-03-09

Family

ID=54219834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510315790.7A Active CN104965798B (en) 2015-06-10 2015-06-10 A kind of data processing method, relevant device and system

Country Status (1)

Country Link
CN (1) CN104965798B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018102967A1 (en) * 2016-12-05 2018-06-14 华为技术有限公司 Control method, storage device and system for data read/write command in nvme over fabric architecture
WO2018188084A1 (en) * 2017-04-14 2018-10-18 华为技术有限公司 Data access method and device
WO2019127517A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Data processing method and device, dma controller, and computer readable storage medium
WO2019127507A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Data processing method and device, dma controller, and computer readable storage medium
CN112948282A (en) * 2019-12-31 2021-06-11 北京忆芯科技有限公司 Computing acceleration system for fast data search
CN114442909A (en) * 2020-11-04 2022-05-06 大唐移动通信设备有限公司 Data processing method and device
CN115454900A (en) * 2022-08-08 2022-12-09 北京阿帕科蓝科技有限公司 Data transmission method, data transmission device, computer equipment, storage medium and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4271466A (en) * 1975-02-20 1981-06-02 Panafacom Limited Direct memory access control system with byte/word control of data bus
US6128728A (en) * 1997-08-01 2000-10-03 Micron Technology, Inc. Virtual shadow registers and virtual register windows
CN102467472A (en) * 2010-11-08 2012-05-23 中兴通讯股份有限公司 System-on-chip (SoC) chip boot startup device and SoC chip
CN103713953A (en) * 2013-12-17 2014-04-09 上海华为技术有限公司 Device and method for transferring data in memory

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2771721A4 (en) * 2011-10-28 2017-03-29 The Regents of The University of California Multiple-core computer processor for reverse time migration

Also Published As

Publication number Publication date
CN104965798A (en) 2015-10-07

Similar Documents

Publication Publication Date Title
CN104965798B (en) A kind of data processing method, relevant device and system
US9921750B2 (en) Solid state drive (SSD) memory cache occupancy prediction
CN109558344B (en) DMA transmission method and DMA controller suitable for network transmission
US8935411B2 (en) Method and apparatus for utilizing advertisements to provide information regarding connection setup
US10116746B2 (en) Data storage method and network interface card
CN101324869B (en) Multiplexor based on AXI bus
CN103064807A (en) Multi-channel direct memory access controller
CN103095686A (en) Hot metadata access control method and server
CN103237296A (en) Message sending method and message sending system
CN106326140A (en) Data copying method, direct memory access controller and computer system
CN106775477B (en) SSD (solid State disk) master control data transmission management device and method
CN102566958A (en) Image segmentation processing device based on SGDMA (scatter gather direct memory access)
CN103678573A (en) Method and system for achieving cache acceleration
CN104040506B (en) Equilibrium uses the bandwidth of multiple requesters of shared accumulator system
CN108351836A (en) With the multi-stage non-volatile caching selectively stored
CN115102908A (en) Method for generating network message based on bandwidth control and related device
EP2620876B1 (en) Method and apparatus for data processing, pci-e bus system and server
CN104571957B (en) A kind of method for reading data and assembling device
CN111352869B (en) Data transmission method and device and storage medium
CN107241788A (en) The power consumption control method and device of wearable device
CN105912477B (en) A kind of method, apparatus and system that catalogue is read
US10250515B2 (en) Method and device for forwarding data messages
CN110188066B (en) FPGA aiming at large-capacity data and FPGA algorithm based on opencl
CN103516812A (en) Method for accelerating cloud storage internal data transmission
CN106155626A (en) One Android Sparse form image download method fast and effectively

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant