CN109324982A - A kind of data processing method and data processing equipment - Google Patents
- Publication number
- CN109324982A
- Authority
- CN
- China
- Prior art keywords
- data block
- inactive
- processing equipment
- inactive data
- data processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiments of the present application disclose a data processing method and data processing equipment that can improve data access efficiency and reduce power consumption. The method of the embodiments includes: the data processing equipment receives configuration information from the processor, where the configuration information instructs the data processing equipment to call an inactive data block and includes the source address of the inactive data block, the destination address of the inactive data block, the inactive data block size, and the call instruction format of the inactive data block; the data processing equipment performs fine-grained partitioning according to the inactive data block size to determine the number of transmissions; the data processing equipment then calls the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block; finally, the data processing equipment stores the inactive data block at the destination address of the inactive data block.
Description
Technical field
This application relates to the communications field, and in particular to a data processing method and data processing equipment.
Background technique
Current processors mostly use a cache (Cache) structure as local storage to mitigate the severe performance degradation caused by long-latency accesses to memory. By exploiting the spatial and temporal locality of data access, the Cache structure has been continuously optimized and refined to support software and hardware prefetching of cache lines (Cache lines) and coherent access, greatly improving software flexibility and memory access performance.
An existing method of data processing using a Cache is as follows: during a Cache access, the data to be accessed is read into local memory, and a corresponding flag is set to indicate that the data is now stored locally; when the same data is accessed again, it is already in local memory, so high-speed access is achieved without accessing the external storage unit again.
However, every Cache access requires comparison against multiple address tags to determine whether the data to be accessed is valid. Data access efficiency is therefore low: on the one hand, power consumption is excessive; on the other hand, the data to be accessed may be found invalid, so the access hit probability is low, the amount of redundant access is large, and power consumption increases further.
Summary of the invention
The embodiments of the present application provide a data processing method and data processing equipment that can improve data access efficiency and reduce power consumption.
In view of this, a first aspect of the application provides a data processing method, which may include: when software needs to access a contiguous data block, or a small amount of individual data, in external space, the processor rapidly configures the relevant information of the data processing equipment, for example, the source address of the inactive data block, the destination address of the inactive data block, the inactive data block size, and the call instruction format of the inactive data block; after generating the configuration information, the processor sends it to the data processing equipment. The data processing equipment performs fine-grained partitioning according to the inactive data block size to determine the number of transmissions; it then calls the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block; finally, the data processing equipment stores the inactive data block at the destination address of the inactive data block. Because the inactive data block is accessed directly, the comparison of multiple address tags is avoided, which improves access efficiency and reduces power consumption. In addition, the probability of deterministic access to the inactive data block increases, further reducing power consumption. The application can thus improve data access efficiency and reduce power consumption.
In some possible implementations, the data processing equipment performing fine-grained partitioning according to the inactive data block size to determine the number of transmissions may be: the data processing equipment performs fine-grained partitioning according to the inactive data block size to determine the number of bursts, and determines the number of transmissions from the number of bursts, where a burst is a data packet that may contain 512 bytes of data.
In other possible implementations, the data processing equipment storing the inactive data block at the destination address of the inactive data block may be: the data processing equipment is preconfigured with multiple channels and sends the inactive data block to the destination address of the inactive data block through the configured channels; the inactive data block is then stored at that destination address.
In other possible implementations, the destination address of the inactive data block is in external memory and the source address may be in internal memory; storing the inactive data block at its destination address may then be: the data processing equipment stores the inactive data block in external memory.
In other possible implementations, the destination address of the inactive data block is in internal memory and the source address may be in external memory; storing the inactive data block at its destination address may then be: the data processing equipment stores the inactive data block in internal memory.
In other possible implementations, after the data processing equipment receives the configuration information from the processor, it may cache the configuration information in a circular queue.
In other possible implementations, if the configuration information also instructs the data processing equipment to call stand-by discrete data, the data processing equipment may store the stand-by discrete data through a preset cache (Cache). Thus, by presetting a Cache, the access flexibility of discrete data is retained even when the Cache specification is reduced, expanding the range of application scenarios of the application.
A second aspect of the application provides a data processing method, which may include: the processor generates configuration information including the source address of the inactive data block, the destination address of the inactive data block, the inactive data block size, and the call instruction format of the inactive data block; the processor sends the configuration information to the data processing equipment, so that the data processing equipment calls the inactive data block according to the configuration information.
A third aspect of the application provides data processing equipment that can implement the functions of the method provided by the first aspect or any optional implementation of the first aspect. The functions may be implemented by software, with the software including modules corresponding to the functions, each module being used to perform the corresponding function.
A fourth aspect of the application provides a processor that can implement the functions of the method provided by the second aspect or any optional implementation of the second aspect. The functions may be implemented by software, with the software including modules corresponding to the functions, each module being used to perform the corresponding function.
A fifth aspect of the application provides a computer storage medium for storing computer software instructions used by the above data processing equipment, including a program designed to perform the functions implemented by the data processing equipment in the above aspects.
A sixth aspect of the application provides a computer storage medium for storing computer software instructions used by the above processor, including a program designed to perform the functions implemented by the processor in the above aspects.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages: the inactive data block is accessed directly, so the comparison of multiple address tags is avoided, access efficiency is improved, and power consumption is reduced. In addition, the deterministic access probability of the inactive data block is improved and the amount of redundant access is reduced, further lowering power consumption. The application can thus improve data access efficiency and reduce power consumption.
Detailed description of the invention
To describe the technical solutions of the application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some examples of the present application; a person of ordinary skill in the art may derive other drawings from them.
Fig. 1 is a kind of system architecture diagram of data processing method provided by the present application;
Fig. 2 is a kind of FDFU schematic diagram of internal structure provided by the present application;
Fig. 3 is a kind of condition managing schematic diagram of order caching administrative unit provided by the present application;
Fig. 4 is a kind of data processing method flow chart provided by the present application;
Fig. 5 is a kind of two-way FDFU schematic diagram of internal structure provided by the present application;
Fig. 6 is the configuration diagram that a kind of FDFU provided by the present application cooperates Cache mechanism;
Fig. 7 is a kind of data processing equipment structure chart provided by the present application;
Fig. 8 is another data processing equipment structure chart provided by the present application.
Specific embodiment
The embodiments of the present application provide a data processing method and data processing equipment that can improve data access efficiency and reduce power consumption.
The terms "first", "second", "third", "fourth", and so on (if present) in the description, claims, and drawings of the application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so termed are interchangeable in appropriate circumstances, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
The system architecture of the data processing method in the application is described below. Referring to Fig. 1, Fig. 1 is a system architecture diagram of a data processing method provided by the application; it includes the following parts: memory, a Fast Data pre-Fetch Unit (FDFU), and a kernel. By rapidly configuring the FDFU module, the effect of fine-grained data moving is achieved. It should be noted that the data processing equipment in the application may be the FDFU in Fig. 1, and the processor in the application may include the kernel in Fig. 1.
Referring to Fig. 2, Fig. 2 is a schematic diagram of the internal structure of an FDFU provided by the application. The FDFU module may include: a command receiving unit, a command cache management unit, a read-prefetch management unit, a read data interaction unit, and a write data interaction unit.
The command receiving unit is mainly responsible for receiving commands from the kernel, assigning an appropriate logical identifier (Identity, ID), and returning it to the kernel; at the same time, the newly received command is parsed and inserted into the command cache management unit.
The command cache management unit is mainly responsible for managing the command cache. It can cache 16 commands from the kernel and maintain the execution state of those 16 commands. The state management mechanism may refer to Fig. 3. In Fig. 3, the command cache is a circular queue, in which a first signal (such as instr_buffer_head) indicates the head position of the circular queue, and the command it points to is the next command to be executed; a second signal (such as instr_buffer_over) indicates the instruction currently executing and expected to return in order, and returning a command ID other than the one indicated by the second signal is also supported; a third signal (such as instr_buffer_tail) is the tail address of the circular queue, where the next received command will be written. Fig. 3 shows an example of the queue state: the first signal points to ID4, indicating that the command with ID 4 will be supplied for command parsing next; the second signal points to ID0, indicating that the command with ID 0 is executing (status bit 1); in addition, the commands with IDs 1 and 2 have completed execution and their status bits are cleared; once the command with ID 0 completes, the command with ID 3 should execute; the third signal points to ID15, so the circular queue can accept only one more command. The command cache management unit is also responsible for query instructions from the kernel: when the status bit of the corresponding ID is 0, the data move is complete and the logical channel is released; otherwise the data move has not completed and the kernel needs to stall.
The read-prefetch management unit is mainly responsible for reading commands from the command cache management unit. It can support continuous parsing of up to 4 commands (before the 8 outstanding bus bursts are used up); it cuts a large data packet (data block) into multiple bursts; it records and maintains the state of the 4 executing commands and the states of the 8 outstanding bus bursts; it receives the data returned by read operations; and it merges and aligns data to fit the read/write bit width of the local memory. The read-prefetch management unit is mainly implemented by a state machine with 3 states, whose meanings are as follows. First state: idle; wait for the command channel to be ready and, once a command is present, record the source address, destination address, and data block size of the current command, then transition to the second state. Second state: send the first burst request; if the first burst request is also the last burst request, that is, the data block size is less than or equal to the default maximum burst size (which may be set to 512 bytes), the state machine jumps back to the first state; if the data block size is greater than the default maximum burst size, the state machine jumps to the third state. Third state: the received read data is pipelined according to the tags generated by the state machine; when all outstanding bursts of a given command ID have returned and been successfully written back to local memory, the corresponding command ID is returned to the command cache management unit and the corresponding state is updated.
The read data interaction unit is mainly responsible for arbitrating between prefetch requests and direct read requests; in the application, direct read requests have a higher priority than prefetch requests.
The write data interaction unit can cache 4 pieces of 256-bit data and supports merging between multiple stores; overwriting data at the same address is supported provided there is no flush operation.
The data processing method in the application is described below through a specific embodiment. Referring to Fig. 4, one embodiment of the data processing method in the application includes:
101. The data processing equipment receives configuration information from the processor, where the configuration information instructs the data processing equipment to call an inactive data block and includes the source address of the inactive data block, the destination address of the inactive data block, the inactive data block size, and the call instruction format of the inactive data block.
In this embodiment, when software needs to access a contiguous data block, or a small amount of individual data, in external space, the processor rapidly configures the relevant information of the data processing equipment, for example, the source address of the inactive data block, the destination address of the inactive data block, the inactive data block size, and the call instruction format of the inactive data block; after the configuration information is generated, the processor sends it to the data processing equipment.
The relevant configuration of the data processing equipment in this embodiment can satisfy the following requirements:
- Data moving from external memory to internal memory is supported; the move size is configurable, with a maximum of 64 KB. Data moving between external memories is not supported, and data moving from internal memory to external memory is not supported.
- A single instruction completes the configuration of the relevant information of the data processing equipment, including the source address, destination address, and data size, and automatically returns a channel ID for querying.
- A single instruction queries whether the data move has succeeded; if it has not, the kernel of the processor blocks and waits for the previous data move to complete.
- The data processing equipment is configured with a number of channels, for example 16; if all 16 channels are occupied at configuration time, the kernel blocks and waits for the previous data move to complete.
- Internal memory and external memory are uniformly addressed; their addresses do not overlap.
- Four parallel logical channels are supported, so the execution of 4 commands can be maintained simultaneously.
- Data block splitting is supported: a long data block is split into multiple bursts.
- 8 independent read operations are supported, corresponding to 8 outstanding bus bursts, that is, 8 burst requests.
- Merging of write data within 16 consecutive cycles is supported, with a write buffer for 4 pieces of 256-bit data; buffer flushing is supported.
- External configuration of the upper 16 bits of the internal memory address is supported.

In addition, the processor supports direct read and write operations on external memory, with a maximum of 2 direct read operations and a maximum of 32 direct write operations on external memory.
It should be noted that the call instruction format can indicate the moving direction of the inactive data block, for example from external space to internal space, or from internal space to external space.
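The single-instruction configuration described above — source address, destination address, block size, direction, and channel ID — can be sketched as a descriptor with a validity check mirroring the listed constraints (64 KB maximum move, 16 channels). The field and constant names are illustrative, not taken from the patent.

```c
#include <assert.h>
#include <stdint.h>

/* Direction encoded by the call instruction format. */
typedef enum {
    MOVE_EXT_TO_INT,  /* external memory -> internal memory */
    MOVE_INT_TO_EXT   /* internal memory -> external memory */
} move_dir;

#define FDFU_MAX_MOVE (64u * 1024u)  /* maximum configurable move size */
#define FDFU_CHANNELS 16u            /* example channel count */

/* Single-instruction configuration of the data processing equipment. */
typedef struct {
    uint64_t src_addr;  /* source address of the inactive data block */
    uint64_t dst_addr;  /* destination address of the inactive data block */
    uint32_t size;      /* inactive data block size in bytes */
    move_dir dir;       /* moving direction from the call instruction format */
    uint8_t  channel;   /* auto-returned logical channel ID, 0..15 */
} fdfu_config;

/* Basic validity check mirroring the constraints listed above. */
int fdfu_config_valid(const fdfu_config *c) {
    return c->size > 0 && c->size <= FDFU_MAX_MOVE && c->channel < FDFU_CHANNELS;
}
```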
The processor can determine the inactive data block size in the following way: for example, the processor determines a time granularity according to the distance between the external memory in external space and the kernel of the processor, and then determines the size of the inactive data block to move according to the data throughput per unit time of that time granularity.
The processor can determine the destination address of the inactive data block in the following way: for example, the processor sets the destination address by creating a local temporary variable according to the inactive data block size. The detailed process is as follows: according to the size of each inactive data block, define a temporary variable (for cyclic access to a contiguous data block, define a ping-pong buffer; for a discontinuous variable, a local temporary variable can be defined directly); generate the source address in external space (the global variable) and the destination address of the local temporary variable; and set the size of each move (for a contiguous data block, the ping-pong size of the fine-grained move; for a discontinuous variable, the size of the entire variable).
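The ping-pong buffer mentioned above can be sketched as follows: the unit fills one half of a double buffer while software consumes the other, then the halves swap. This is a minimal illustration in which a plain `memcpy` stands in for the hardware move; the chunk size and names are assumptions.

```c
#include <assert.h>
#include <string.h>

#define CHUNK 256  /* fine-grained move size per transfer (illustrative) */

/* Ping-pong destination for cyclic, contiguous block access. */
typedef struct {
    unsigned char buf[2][CHUNK];
    int fill;  /* index (0 or 1) of the half currently being filled */
} pingpong;

/* Move the next chunk from the external source into the fill half,
   then swap halves; returns the half now ready for consumption. */
const unsigned char *pingpong_fill(pingpong *p, const unsigned char *src) {
    memcpy(p->buf[p->fill], src, CHUNK);   /* stands in for the FDFU move */
    const unsigned char *ready = p->buf[p->fill];
    p->fill ^= 1;
    return ready;
}
```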
In practical applications, the processor is connected to the data processing equipment by a simple interactive internal bus, and the data processing equipment is connected to external memory and to internal memory by buses.
Optionally, in some possible embodiments, after the data processing equipment receives the configuration information from the processor, the method may further include: the data processing equipment caches the configuration information in a circular queue.
102. The data processing equipment performs fine-grained partitioning according to the inactive data block size to determine the number of transmissions.
In this embodiment, the data processing equipment can determine the number of transmissions in the following way: the data processing equipment performs fine-grained partitioning according to the inactive data block size to determine the number of bursts, and then determines the number of transmissions from the number of bursts, where a burst is a data packet containing 512 bytes of data.
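With 512-byte bursts, the fine-grained partitioning reduces to a ceiling division; under the simplest reading (one transmission per burst, an assumption not stated explicitly in the text), the number of transmissions equals the burst count:

```c
#include <assert.h>

#define BURST_BYTES 512u  /* one burst packet carries 512 bytes of data */

/* Number of bursts needed to move a block of `size` bytes. */
unsigned burst_count(unsigned size) {
    return (size + BURST_BYTES - 1) / BURST_BYTES;  /* ceiling division */
}
```

For example, the 64 KB maximum move splits into 128 bursts.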
103. The data processing equipment calls the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block.
In this embodiment, after the data processing equipment determines the number of transmissions, it calls the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block.
104. The data processing equipment stores the inactive data block at the destination address of the inactive data block.
In this embodiment, after the data processing equipment calls the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block, the data processing equipment stores the inactive data block at the destination address of the inactive data block.
Optionally, in some possible embodiments, the data processing equipment storing the inactive data block at the destination address of the inactive data block may be: the data processing equipment sends the inactive data block to the destination address of the inactive data block through a preconfigured channel, and stores the inactive data block at that destination address. In this embodiment, the data processing equipment is preconfigured with channels, the maximum number of which may be limited to 16.
Further, if the destination address of the inactive data block is in external memory, storing the inactive data block at its destination address may be: the data processing equipment stores the inactive data block in external memory. Alternatively, if the destination address of the inactive data block is in internal memory, storing the inactive data block at its destination address may be: the data processing equipment stores the inactive data block in internal memory.
In this embodiment, the data processing equipment supports data moving from external memory to internal memory and also supports data moving from internal memory to external memory, that is, it supports outbound moving of the inactive data block. This embodiment provides a schematic diagram of the internal structure of a bidirectional FDFU that supports outbound moving of the inactive data block; see Fig. 5, a schematic diagram of the internal structure of a bidirectional FDFU provided by the application, where the FDFU shown in Fig. 5 is applicable as the data processing equipment in the application. Under the bidirectional FDFU internal structure shown in Fig. 5, data blocks support bidirectional two-dimensional moving. Bidirectional means that data blocks are moved from external memory to internal memory, or from internal memory to external memory. Two-dimensional means moving at fixed intervals, that is, moving the data in a data block according to a fixed interval, or with per-move offset values forming an arithmetic progression, where the fixed interval and the offset values can be configured by the processor.
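The fixed-interval case of two-dimensional moving can be sketched as a strided gather: copy fixed-size chunks spaced `stride` bytes apart in the source and pack them contiguously at the destination. This is an illustrative software model; the arithmetic-progression case would let the interval itself grow by a constant step, and all parameter names are assumptions.

```c
#include <string.h>

/* Two-dimensional move at a fixed interval: copy `count` chunks of
   `chunk` bytes each, reading every `stride` bytes in the source and
   packing the chunks contiguously at the destination. */
void move_2d(unsigned char *dst, const unsigned char *src,
             unsigned chunk, unsigned stride, unsigned count) {
    for (unsigned i = 0; i < count; i++)
        memcpy(dst + i * chunk, src + i * stride, chunk);
}
```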
Optionally, in some possible embodiments, if the configuration information also instructs the data processing equipment to call stand-by discrete data, the method may further include: the data processing equipment stores the stand-by discrete data through a preset cache (Cache). In this embodiment, by presetting a Cache, the access flexibility of discrete data is retained even when the Cache specification is reduced, expanding the range of application scenarios of the application.
This embodiment also provides an architecture diagram of an FDFU cooperating with a Cache mechanism to support discrete data access; see Fig. 6, an architecture diagram of an FDFU cooperating with a Cache mechanism provided by the application, where the FDFU shown in Fig. 6 is equivalent to the data processing equipment in the application.
In this embodiment, the inactive data block is accessed directly, so the comparison of multiple address tags is avoided, access efficiency is improved, and power consumption is reduced. In addition, the deterministic access probability of the inactive data block is improved and the amount of redundant access is reduced, further lowering power consumption. The application can thus improve data access efficiency and reduce power consumption. Second, outbound moving of the inactive data block is supported, which improves the efficiency of external write operations and reduces bus complexity. Finally, the access flexibility of discrete data is retained, expanding the range of application scenarios of the application.
The data processing method in the application has been described above through embodiments; the data processing equipment in the application is introduced below through embodiments. Referring to Fig. 7, one embodiment of the data processing equipment in the application includes:
a receiving module 201, configured to receive configuration information from the processor, where the configuration information instructs the data processing equipment to call an inactive data block and includes the source address of the inactive data block, the destination address of the inactive data block, the inactive data block size, and the call instruction format of the inactive data block;
a determining module 202, configured to perform fine-grained partitioning according to the inactive data block size to determine the number of transmissions;
a calling module 203, configured to call the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block;
a storage module 204, configured to store the inactive data block at the destination address of the inactive data block.
In this embodiment, the inactive data block is accessed directly, so the comparison of multiple address tags is avoided, access efficiency is improved, and power consumption is reduced. In addition, the deterministic access probability of the inactive data block is improved and the amount of redundant access is reduced, further lowering power consumption. The application can thus improve data access efficiency and reduce power consumption.
Further, in some possible embodiments, the determining module 202 is specifically configured to perform fine-grained partitioning according to the inactive data block size to determine the number of bursts, and to determine the number of transmissions from the number of bursts.
Further, in some possible embodiments, the storage module 204 is specifically configured to send the inactive data block to the destination address of the inactive data block through a preconfigured channel, and to store the inactive data block at that destination address.
Further, in some possible embodiments, the destination address of the inactive data block is in external memory, and the storage module 204 is specifically configured to store the inactive data block in external memory. Further, in some possible embodiments, the destination address of the inactive data block is in internal memory, and the storage module 204 is specifically configured to store the inactive data block in internal memory.
Further, in some possible embodiments, the storage module 204 is also configured to cache the configuration information in a circular queue.
Further, in some possible embodiments, if the configuration information is further used to instruct the data processing device to call to-be-used discrete data, the storage module 204 is further configured to store the to-be-used discrete data through a preset cache (Cache).
It can be seen that, by using the preset Cache, the access flexibility of discrete data is retained even when the Cache is reduced in size, which broadens the range of application scenarios of the present application.
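The role of the preset Cache sketched above — serving scattered ("discrete") accesses from a small structure while bulk block moves bypass it — can be illustrated with a tiny address-to-value cache. The class name and the FIFO replacement policy are assumptions; the patent does not specify a replacement policy:

```python
class DiscreteDataCache:
    """Tiny address-to-value cache for scattered ("discrete") accesses.
    Bulk block moves bypass it, so a small capacity suffices.
    FIFO eviction is an assumption; the patent names no policy."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = {}  # address -> value; dicts keep insertion order

    def load(self, addr: int, backing: dict):
        """Return the value at addr, filling the cache on a miss."""
        if addr in self.entries:
            return self.entries[addr]          # cache hit
        value = backing[addr]                  # miss: fetch from memory
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict oldest entry
        self.entries[addr] = value
        return value
```

Because only discrete accesses go through this structure, its capacity can shrink without affecting block transfers, which is the flexibility claimed above.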
The data processing device of the present application has been described above from the perspective of modular functional entities; it is described below from the perspective of hardware processing. Referring to Fig. 8, the data processing device of the present application includes a receiver 301, a processor 302, and a memory 303.
The data processing device involved in this application may have more or fewer components than those shown in Fig. 8, may combine two or more components, or may have a different configuration or arrangement of components. Each component may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The receiver 301 is configured to perform the following operation:
receiving processor configuration information, where the configuration information is used to instruct the data processing device to call an inactive data block, and the configuration information includes the source address of the inactive data block, the destination address of the inactive data block, the size of the inactive data block, and the call instruction format of the inactive data block.
The processor 302 is configured to perform the following operations:
performing fine-grained partitioning according to the size of the inactive data block to determine a number of transmissions; and
calling the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block.
The memory 303 is configured to perform the following operation:
storing the inactive data block at the destination address of the inactive data block.
In this embodiment, the inactive data block is accessed directly, which avoids repeated address-tag comparisons, improves access efficiency, and reduces power consumption. In addition, the probability of deterministic access to the inactive data block is increased and redundant accesses are reduced, which further lowers power consumption. The present application can therefore improve data access efficiency and reduce power consumption.
The processor 302 is further configured to perform the following operations:
performing fine-grained partitioning according to the size of the inactive data block to determine a number of bursts; and determining the number of transmissions according to the number of bursts.
The memory 303 is further configured to perform the following operations:
sending the inactive data block to the destination address of the inactive data block through a preconfigured channel; and storing the inactive data block at that destination address.
The memory 303 is further configured to perform the following operation:
when the destination address of the inactive data block is in an external memory, storing the inactive data block in the external memory.
The memory 303 is further configured to perform the following operation:
when the destination address of the inactive data block is in an internal memory, storing the inactive data block in the internal memory.
The memory 303 is further configured to perform the following operation:
caching the configuration information through a circular queue.
The memory 303 is further configured to perform the following operation:
if the configuration information is further used to instruct the data processing device to call to-be-used discrete data, storing the to-be-used discrete data through a preset cache (Cache).
The above embodiments may be implemented wholly or partly in software, hardware, or a combination thereof. When implemented in software, or in a combination of hardware and software, they may be implemented wholly or partly in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a storage medium, or transmitted from one storage medium to another. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over coaxial cable, twisted pair, or optical fiber) or a wireless manner (for example, over infrared, radio, or microwave links). The storage medium may be any medium that a computer can store, or a data storage device, such as a server or data center, integrating one or more media. The medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, an optical disc), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
A person of ordinary skill in the art may clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary. The division into units is merely a division by logical function; there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system.
The units described as separate parts may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network devices. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of this application.
The embodiments of this application may refer to one another for relevant parts: the method embodiments may reference each other, and because the device provided in each device embodiment is used to perform the method provided in the corresponding method embodiment, each device embodiment may be understood with reference to the relevant parts of the related method embodiment.
The device structure diagrams provided in the device embodiments of this application show only simplified designs of the corresponding devices. In practical applications, a device may include any number of communication modules, processors, memories, and the like, so as to implement the functions or operations performed by the device in the device embodiments of this application, and all devices that can implement this application fall within the protection scope of this application.
The above embodiments are intended only to describe the technical solutions of this application, not to limit them. A person of ordinary skill in the art may modify the technical solutions described in the foregoing embodiments, provided that such modifications do not cause the corresponding technical solutions to depart from the scope of the claims.
Claims (15)
1. A data processing method, comprising:
receiving, by a data processing device, processor configuration information, where the configuration information is used to instruct the data processing device to call an inactive data block, and the configuration information includes a source address of the inactive data block, a destination address of the inactive data block, a size of the inactive data block, and a call instruction format of the inactive data block;
performing, by the data processing device, fine-grained partitioning according to the size of the inactive data block to determine a number of transmissions;
calling, by the data processing device, the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block; and
storing, by the data processing device, the inactive data block at the destination address of the inactive data block.
2. The method according to claim 1, wherein the performing, by the data processing device, fine-grained partitioning according to the size of the inactive data block to determine a number of transmissions comprises:
performing, by the data processing device, fine-grained partitioning according to the size of the inactive data block to determine a number of bursts; and
determining the number of transmissions according to the number of bursts.
3. The method according to claim 1, wherein the storing, by the data processing device, the inactive data block at the destination address of the inactive data block comprises:
sending, by the data processing device, the inactive data block to the destination address of the inactive data block through a preconfigured channel; and
storing, by the data processing device, the inactive data block at the destination address of the inactive data block.
4. The method according to claim 3, wherein the destination address of the inactive data block is in an external memory, and the storing, by the data processing device, the inactive data block at the destination address of the inactive data block comprises:
storing, by the data processing device, the inactive data block in the external memory.
5. The method according to claim 3, wherein the destination address of the inactive data block is in an internal memory, and the storing, by the data processing device, the inactive data block at the destination address of the inactive data block comprises:
storing, by the data processing device, the inactive data block in the internal memory.
6. The method according to any one of claims 1 to 5, wherein after the receiving, by the data processing device, processor configuration information, the method comprises:
caching, by the data processing device, the configuration information through a circular queue.
7. The method according to any one of claims 1 to 5, wherein if the configuration information is further used to instruct the data processing device to call to-be-used discrete data, the method further comprises:
storing, by the data processing device, the to-be-used discrete data through a preset cache (Cache).
8. A data processing device, comprising:
a receiving module, configured to receive processor configuration information, where the configuration information is used to instruct the data processing device to call an inactive data block, and the configuration information includes a source address of the inactive data block, a destination address of the inactive data block, a size of the inactive data block, and a call instruction format of the inactive data block;
a determining module, configured to perform fine-grained partitioning according to the size of the inactive data block to determine a number of transmissions;
a calling module, configured to call the inactive data block according to the number of transmissions, the source address of the inactive data block, and the call instruction format of the inactive data block; and
a storage module, configured to store the inactive data block at the destination address of the inactive data block.
9. The device according to claim 8, wherein the determining module is specifically configured to perform fine-grained partitioning according to the size of the inactive data block to determine a number of bursts, and to determine the number of transmissions according to the number of bursts.
10. The device according to claim 8, wherein the storage module is specifically configured to send the inactive data block to the destination address of the inactive data block through a preconfigured channel, and to store the inactive data block at the destination address of the inactive data block.
11. The device according to claim 10, wherein the destination address of the inactive data block is in an external memory, and the storage module is specifically configured to store the inactive data block in the external memory.
12. The device according to claim 10, wherein the destination address of the inactive data block is in an internal memory, and the storage module is specifically configured to store the inactive data block in the internal memory.
13. The device according to any one of claims 8 to 12, wherein the storage module is further configured to cache the configuration information through a circular queue.
14. The device according to any one of claims 8 to 12, wherein if the configuration information is further used to instruct the data processing device to call to-be-used discrete data, the storage module is further configured to store the to-be-used discrete data through a preset cache (Cache).
15. A computer storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710640687.9A CN109324982B (en) | 2017-07-31 | 2017-07-31 | Data processing method and data processing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109324982A true CN109324982A (en) | 2019-02-12 |
CN109324982B CN109324982B (en) | 2023-06-27 |
Family
ID=65245648
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710640687.9A Active CN109324982B (en) | 2017-07-31 | 2017-07-31 | Data processing method and data processing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109324982B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103500107A (en) * | 2013-09-29 | 2014-01-08 | 中国船舶重工集团公司第七0九研究所 | Hardware optimization method for CPU |
US20140129766A1 (en) * | 2012-11-08 | 2014-05-08 | Qualcomm Incorporated | Intelligent dual data rate (ddr) memory controller |
CN104252420A (en) * | 2013-06-29 | 2014-12-31 | 华为技术有限公司 | Data writing method and memory system |
WO2015180649A1 (en) * | 2014-05-30 | 2015-12-03 | 华为技术有限公司 | Method for moving data between storage devices, controller and storage system |
- 2017-07-31: application CN201710640687.9A filed; patent CN109324982B (en), status Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113407357A (en) * | 2020-03-17 | 2021-09-17 | 华为技术有限公司 | Method and device for inter-process data movement |
CN113407357B (en) * | 2020-03-17 | 2023-08-22 | 华为技术有限公司 | Method and device for inter-process data movement |
Also Published As
Publication number | Publication date |
---|---|
CN109324982B (en) | 2023-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100481028C (en) | Method and device for implementing data storage using cache | |
US10223254B1 (en) | Namespace change propagation in non-volatile memory devices | |
US8935478B2 (en) | Variable cache line size management | |
JP6768928B2 (en) | Methods and devices for compressing addresses | |
US8949529B2 (en) | Customizing function behavior based on cache and scheduling parameters of a memory argument | |
US10496550B2 (en) | Multi-port shared cache apparatus | |
CN102841854A (en) | Method and system for executing data reading based on dynamic hierarchical memory cache (hmc) awareness | |
CN108496161A (en) | Data buffer storage device and control method, data processing chip, data processing system | |
CN112632069B (en) | Hash table data storage management method, device, medium and electronic equipment | |
CN103678523A (en) | Distributed cache data access method and device | |
CN113641596B (en) | Cache management method, cache management device and processor | |
JP2004110503A (en) | Memory control device, memory system, control method for memory control device, channel control part and program | |
US9342258B2 (en) | Integrated circuit device and method for providing data access control | |
JP3460617B2 (en) | File control unit | |
CN112997161A (en) | Method and apparatus for using storage system as main memory | |
CN111406251B (en) | Data prefetching method and device | |
KR102617154B1 (en) | Snoop filter with stored replacement information, method for same, and system including victim exclusive cache and snoop filter shared replacement policies | |
CN109324982A (en) | A kind of data processing method and data processing equipment | |
JP4431492B2 (en) | Data transfer unit that supports multiple coherency granules | |
CN117331858B (en) | Storage device and data processing system | |
CN117312201B (en) | Data transmission method and device, accelerator equipment, host and storage medium | |
US11947418B2 (en) | Remote access array | |
EP2728485B1 (en) | Multi-Port Shared Cache Apparatus | |
JP2009088622A (en) | Packet transfer device having buffer memory and method thereof | |
KR101591583B1 (en) | Method and apparatus for caching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||